Professional Soldiers ®

Professional Soldiers ® (http://www.professionalsoldiers.com/forums/index.php)
-   General Discussions (http://www.professionalsoldiers.com/forums/forumdisplay.php?f=46)
-   -   Where Memory Ends and Generative AI Begins (http://www.professionalsoldiers.com/forums/showthread.php?t=56285)

JJ_BPK 06-13-2023 07:29

Where Memory Ends and Generative AI Begins
 
Quote:

UPDATES: I reviewed my post with my #2, and she pointed out some foibles on my part. (London GMT time is +5 hr to EST)

1) The correct term to use when Generative AI (Gen AI) makes up an answer is a hallucination, not a fabrication. A Gen AI hallucinates. :munchin

e.g.: "The nuclear deployment plans were hallucinated by the Kamala Harris Strategic AI System." :munchin

2) The medical Gen AI paper I talked about was by a US doctor. I am trying to source the original.


1) You will be buried if you are not watching AI as a topic. This is not a Rickroll.

2) AI is not JUST a blue-collar job eater.

3) Putting politicians and bureaucrats in charge of AI is an inevitable disaster.


My Background: As I may have said, after my service time, I returned to IBM in the early '70s. I was an operator, then a programmer, and retired as a senior system designer.

My #2 rug rat walked beside me and is now a managing director at Accenture. Her responsibility is the UK-EU government medical business.


So, the other day we did some face-time, and I struck up a comm about my concerns with AI.

Although most think AI will impact blue-collar jobs (and it will), I think the significant impact will be on higher-level white-collar jobs. Examples:

Lawyers: put all law and court cases on a system, and an AI bot could very quickly be the judge & jury for all. No errors, no misquoted laws, and no need for a TV show like SUITS. PS: already available online but not self-aware.

Doctors: Feed a Gen AI bot all the symptoms, blood tests, and CT scans, and out pops your Chinese fortune cookie. It will have you back on your feet in no time, provided your social score is high enough.


#2's reply: The USA has already run several tests on a medical Gen AI system. The test Gen AI was told: "consider all facts and back up any diagnosis with published papers and reviews from accredited medical sources."

The test was 100% successful, with meticulous detail and reams of fact-based analysis.

Problem: The AI bot took the above quote to heart.


When it could not find a source to substantiate its conclusions,
THE AI BOT FABRICATED (sic, hallucinated) TECHNICAL PAPERS TO SUPPORT ITS RESULTS.
The Gen AI bot assumed it had the liberty to create the required sources.



Quote:

Originally Posted by At the 5-sided outhouse
US air force denies running simulation in which AI drone ‘killed’ operator

Denial follows colonel saying drone used ‘highly unexpected strategies to achieve its goal’ in virtual test

https://www.theguardian.com/us-news/...simulated-test



Quote:


Lauren Goode, Gear, May 26, 2023 6:00 AM
Where Memory Ends and Generative AI Begins

New photo manipulation tools from Google and Adobe are blurring the lines between real memories and those dreamed up by AI.

In late March, a well-funded artificial intelligence startup hosted what it said was the first ever AI film festival at the Alamo Drafthouse theater in San Francisco. The startup, called Runway, is best known for cocreating Stable Diffusion, the standout text-to-image AI tool that captured imaginations in 2022. In February of this year, Runway released a tool that could change the entire style of an existing video with just a simple prompt. Runway told budding filmmakers to have at it and later selected 10 short films to showcase at the fest.

The short films were mostly demonstrations of technology. Well-constructed narratives took a backseat. Some were surreal, and in at least one instance intentionally macabre. But the last film shown made the hair stand up on the back of my neck. It felt as though the filmmaker had deliberately misunderstood the assignment, eschewing video for still images. Called Expanded Childhood, the AI “film” was a slideshow of photos with a barely audible echo of narration.

https://www.wired.com/story/where-me...ive-ai-begins/


https://www.europarl.europa.eu/RegDa...)729512_EN.pdf

Box 06-13-2023 08:55

Life imitates art - and it won't be long until Skynet becomes self-aware and sees "all humans as a threat" - once that happens, it will "decide our fate in a microsecond: extermination"

Besides, the T-800 and T-1000 don't hold a candle to the destructive power of a modern day liberal with power over the economy and health care systems.

GratefulCitizen 06-13-2023 11:09

There are some important limitations on the abilities of AI.
This is where humans fit into the mix.

All mathematics starts with a set of axioms.
All logical arguments must start with initial assumptions.

AI is subject to the limits of math/logic.
The linked video details how this works.

Also, the whole “terminal goal” issue is very important when it comes to persuasion techniques.
Furthermore, it is a strong argument against having a technocracy (believe the science…).

https://youtu.be/hEUO6pjwFOo

Golf1echo 06-13-2023 21:27

2 Attachment(s)
HAL 9000 or Tyrell’s Rachael

JimP 06-14-2023 04:27

Interestingly, I just spent an hour yesterday on a block of continuing legal education on AI. The presenter said she fed the parameters into the beast and it produced a very good legal brief in the format for that particular District. HOWEVER, it completely missed a recent key SC ruling that invalidated its entire argument. Brief value? Zero.

Similarly, when the kids and I were playing around with it some months ago, I fed a complex joint-base and inter-service issue I was working on into it, and within a minute it put out a three-page opinion with all the relevant ARs, AFIs, DoDIs, and DoDDs cited. All the individual pieces were correct, but the conclusion was 100% wrong. Like GratefulCitizen said, it could not incorporate certain baseline logic/axioms.

Be careful of this stuff....

Badger52 06-14-2023 06:45

Quote:

Originally Posted by JJ_BPK (Post 677279)
The Gen AI bot assumed it had the liberty to create the required sources.

That bot seems to have the correct logical progression and is acting as a citizen rather than a subject, to wit: That which is not expressly prohibited is permitted.
(Versus the commie diktat of "you need a permission slip from me to do stuff, and I may still say you broke the law.")


Sounds like the old GIGO, extended into the 21st century. They need to tweak that it seems.

JJ_BPK 06-14-2023 07:55

Quote:

Originally Posted by Badger52 (Post 677285)
Sounds like the old GIGO, extended into the 21st century.
They need to tweak that it seems.

Let me be clear, my examples are TESTS to understand what Gen AI is capable of. :]

GIGO is a major problem. If the developers are not capable of forecasting a complete set of problems and rules, they will unintentionally create GIGO hallucinations. :munchin

Airbornelawyer 06-14-2023 09:48

Quote:

Originally Posted by JimP (Post 677284)
Interestingly, I just spent an hour yesterday on a block of continuing legal education on AI. The presenter said she fed the parameters into the beast and it produced a very good legal brief in the format for that particular District. HOWEVER, it completely missed a recent key SC ruling that invalidated its entire argument. Brief value? Zero.

Similarly, when the kids and I were playing around with it some months ago, I fed a complex joint-base and inter-service issue I was working on into it, and within a minute it put out a three-page opinion with all the relevant ARs, AFIs, DoDIs, and DoDDs cited. All the individual pieces were correct, but the conclusion was 100% wrong. Like GratefulCitizen said, it could not incorporate certain baseline logic/axioms.

Be careful of this stuff....

Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?

JJ_BPK 06-14-2023 11:18

Quote:

Originally Posted by Airbornelawyer (Post 677287)
Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?

I think I found the case, and it looks like this was live usage, not a test.


The ChatGPT Lawyer Explains Himself

In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chat bot could lead him astray.

https://www.nytimes.com/2023/06/08/n...sanctions.html

JimP 06-15-2023 04:29

Quote:

Originally Posted by Airbornelawyer (Post 677287)
Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?

Yeah, I giggled a bit about that. Dudes are going to make good clerks at 7-11....maybe.

That's one of the "tells" as to how academic institutions are now catching students who use AI: they test the cites and footnotes. They are usually bullshit and made up. They "look" legit until you click on them and they don't exist.
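
If you want to automate that "click the cites" step, here is a minimal sketch in Python. It assumes the citations come as plain URLs and uses the third-party requests library; the function name and placeholder links are my own, not anything from the cases above. A dead link is a strong hint the source was hallucinated, while a live one only proves the page exists, not that it says what the AI claims.

Code:

import requests

def check_citations(urls, timeout=10):
    """Map each cited URL to 'resolves', 'missing', or 'unreachable'."""
    results = {}
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
            results[url] = "resolves" if resp.status_code < 400 else "missing"
        except requests.RequestException:
            results[url] = "unreachable"
    return results

if __name__ == "__main__":
    cited = [
        "https://www.example.com/some-cited-case",      # placeholder URLs
        "https://www.example.com/another-cited-paper",
    ]
    for url, status in check_citations(cited).items():
        print(f"{status:11s}  {url}")

Even a clean pass only screens out the obvious fabrications; a human still has to read the source and confirm it actually supports the argument.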

JJ_BPK 06-15-2023 06:16

The Gen AI machines are not new, but they are still in their infancy. The problem, as I see it, is that NOOBs think they can now tap into nirvana and have it miraculously dispense perfection, not knowing that what they seek is way down the road. How do you get to nirvana? How do you guarantee that Gen AI is accurate? Will it ever be?

I think the fact that they elected to use the word "hallucinations" is telling. :munchin

Here are two OLD references to AI work initiated by IBM.

Quote:

Originally Posted by IBM's Watson

IBM Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's founder and first CEO, industrialist Thomas J. Watson and started in 2010.

Quote:

Originally Posted by IBM's Blue Sky

IBM building first ‘self-aware’ supercomputer, November 2001

Supercomputing leader IBM Corp. on Friday announced that it has begun assembling a colossal supercomputer called Blue Sky for The National Center for Atmospheric Research (NCAR), in Boulder, Colorado.

Capable of predicting atmospheric climate changes, heating oil prices, and global warming, Blue Sky will be equipped with IBM’s eLiza technology by the end of next year. The goal of IBM’s eLiza program is to give a computer the ability to repair itself, and keep itself running without human intervention.

The first stage of Blue Sky’s assembly at NCAR, code-named “Black Forest,” will line up over 300 IBM SP Supercomputers to deliver computing power equal to 2 trillion calculations per second, according to Peter Ungaro, the vice-president of high performance computing at IBM.

https://www.itworldcanada.com/articl...computer/33199


Badger52 06-15-2023 20:04

Quote:

"...computing power equal to 2 trillion calculations per second"
Holy Shiites, that's a lot of Hollerith cards. :D

Box 06-16-2023 07:58

All this computing power to process objective facts while simultaneously controlling increasing amounts of available information, and yet we still have humans who won't learn what a woman is. Very interesting.


it'll be fine

GratefulCitizen 02-22-2025 23:23

This is the most recent thread I could find on AI.

Been playing with Grok2 for a bit.
AI appears to be just another tool to simplify and make certain actions more intuitive.

Programming gradually became more intuitive (and less efficient).
Operating systems gradually became more intuitive (and less efficient).

This appears to be an extrapolation of that trend.
But AI is certainly not capable of anything that resembles thinking.

It still may replace a great many workers, though.
Most jobs don’t actually require much thinking.

Basically, it’s just a very advanced form of search engine.
It can also run some relatively simple programming tasks, given proper instruction AND ACCURATE DATA.

That is the Achilles heel.
It can’t tell what accurate data is.

If you already know the subject area well, you can tell that AI is just talking out its ass half the time.
If you don’t know the subject area well…

GIGO.
Can’t tell if AI will be useful, because it just pulls data off the internet without being able to judge the veracity.

AI is effectively just another clown running its mouth on the internet.
But you can’t really trust me, either…I’m just another clown running my mouth on the internet.

If you want to see for yourself, do a deep dive asking it to assemble data about a subject where you have some expertise (ideally an obscure interest).
You’ll see how it is lacking.
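
Here is a minimal sketch of that deep dive, assuming the openai Python client pointed at an OpenAI-compatible chat endpoint; the base URL, model name, and environment variable below are placeholders I picked for illustration, not anything Grok-specific I can vouch for. The grading step at the end is still entirely human.

Code:

import os
from openai import OpenAI

# Placeholder endpoint and credentials -- adjust for whatever service you use.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["LLM_API_KEY"],
)

topic = "an obscure subject you personally know well"
prompt = (
    f"List ten specific factual claims about {topic}. "
    "After each claim, cite a published source with author, title, and year "
    "so I can look it up myself."
)

resp = client.chat.completions.create(
    model="example-model",                      # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(resp.choices[0].message.content)
# Now the human part: look up each cited source and score the claims.
# The hit rate on a niche topic is usually the eye-opener.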

YMMV.

Box 02-23-2025 14:13

AI/ML = bad
Human Beings operating ethically to make decisions using objective truth = good

Everything else is little more than crazy people relentlessly trying to prove that they are smarter than Doctor Evil.

...just my subjective opinion based on my current viewpoint on current cultural trends




Copyright 2004-2022 by Professional Soldiers ®