View Full Version : Where Memory Ends and Generative AI Begins
UPDATES: I reviewed my post with my #2, and she pointed out some foibles on my part. (London is 5 hours ahead of EST.)
1) The correct term when Generative AI (Gen AI) makes up an answer is a hallucination, not a fabrication. A Gen AI hallucinates. :munchin
eg: "The nuclear deployment plans were hallucinated by the Kamala Harris Strategic AI System." :munchin
2) The medical Gen AI paper I talked of was by a US doctor. I am trying to source the original.
1) You will be buried if you are not watching AI as a topic. This is not a Rickroll.
2) AI is not JUST a blue-collar job eater.
3) Putting politics and bureaucrats in charge of AI is an inevitable disaster.
My Background: As I may have said, after my service time, I returned to IBM in the early '70s. I was an operator, then a programmer, and retired as a senior system designer.
My #2 rug rat walked beside me and is now a managing director at Accenture. Her responsibility is the UK-EU government medical business.
SO, the other day we did some FaceTime, and I struck up a comm about my concerns with AI.
Although most think AI will impact blue-collar jobs (and it will), I think the significant impact will be on higher-level white-collar jobs. Examples:
Lawyers: put all law and court cases on a system, and an AI bot could very quickly be the judge & jury for all. No errors, no misquoted laws, and no need for a TV show like SUITS. PS: already available online but not self-aware.
Doctors: Feed a Gen AI bot all the symptoms, blood tests, and CT scans, and out pops your Chinese fortune cookie. It will have you back on your feet in no time, provided your social score is high enough.
#2's reply: The USA has already run several tests on a medical Gen AI system. The test Gen AI was told to "consider all facts and back up any diagnosis with published papers and reviews from accredited medical sources."
The test was 100% successful, with meticulous detail and reams of fact-based analysis.
Problem: The AI bot took the above quote to heart.
When it could not find a source to substantiate its conclusions,
THE AI BOT FABRICATED (sic: hallucinated) TECHNICAL PAPERS TO SUPPORT ITS RESULTS.
The Gen AI bot assumed it had the liberty to create the required sources.
US air force denies running simulation in which AI drone ‘killed’ operator
Denial follows colonel saying drone used ‘highly unexpected strategies to achieve its goal’ in virtual test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Lauren Goode, Gear, May 26, 2023 6:00 AM
Where Memory Ends and Generative AI Begins
New photo manipulation tools from Google and Adobe are blurring the lines between real memories and those dreamed up by AI.
In late March, a well-funded artificial intelligence startup hosted what it said was the first ever AI film festival at the Alamo Drafthouse theater in San Francisco. The startup, called Runway, is best known for cocreating Stable Diffusion, the standout text-to-image AI tool that captured imaginations in 2022. In February of this year, Runway released a tool that could change the entire style of an existing video with just a simple prompt. Runway told budding filmmakers to have at it and later selected 10 short films to showcase at the fest.
The short films were mostly demonstrations of technology. Well-constructed narratives took a backseat. Some were surreal, and in at least one instance intentionally macabre. But the last film shown made the hair stand up on the back of my neck. It felt as though the filmmaker had deliberately misunderstood the assignment, eschewing video for still images. Called Expanded Childhood, the AI “film” was a slideshow of photos with a barely audible echo of narration.
https://www.wired.com/story/where-memory-ends-and-generative-ai-begins/
https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf
Life imitates art - and it won't be long until Skynet becomes self-aware and sees "all humans as a threat" - once that happens, it will "decide our fate in a microsecond: extermination."
Besides, the T-800 and T-1000 don't hold a candle to the destructive power of a modern day liberal with power over the economy and health care systems.
GratefulCitizen
06-13-2023, 11:09
There are some important limitations on the abilities of AI.
This is where humans fit into the mix.
All mathematics start with a set of axioms.
All logical arguments must start with initial assumptions.
AI is subject to the limits of math/logic.
The linked video details how this works.
Also, the whole “terminal goal” issue is very important when it comes to persuasion techniques.
Furthermore, it is a strong argument against having a technocracy (believe the science…).
https://youtu.be/hEUO6pjwFOo
Golf1echo
06-13-2023, 21:27
HAL 9000 or Tyrell's Rachael
Interestingly, I just spent an hour yesterday on a block of continuing legal education on AI. The presenter said she fed the parameters into the beast and it produced a very good legal brief in the format for that particular District. HOWEVER, it completely missed a recent key SC ruling that invalidated its entire argument. Brief value? Zero.
Similarly, when the kids and I were playing around with it some months ago, I fed a complex joint-base and inter-service issue I was working on into it, and within a minute it put out a three-page opinion with all the relevant ARs, AFIs, DoDIs, and DoDDs cited. All the individual pieces were correct, but the conclusion was 100% wrong. Like GratefulCitizen said, it could not incorporate certain baseline logic/axioms.
Be careful of this stuff....
Badger52
06-14-2023, 06:45
The Gen AI bot assumed it had the liberty to create the required sources.
That bot seems to have the correct logical progression and is acting as a citizen rather than a subject, to wit: That which is not expressly prohibited, is permitted.
(Versus the commie diktat of "you need a permission slip from me to do stuff and I may still say you broke the law.")
Sounds like the old GIGO, extended into the 21st century. They need to tweak that it seems.
Sounds like the old GIGO, extended into the 21st century.
They need to tweak that it seems.
Let me be clear, my examples are TESTS to understand what Gen AI is capable of. :]
GIGO is a major problem. If the developers are not capable of forecasting a complete problem set & rules, they will unintentionally create GIGO hallucinations. :munchin
Airbornelawyer
06-14-2023, 09:48
Interestingly, I just spent an hour yesterday on a block of continuing legal education on AI. The presenter said she fed the parameters into the beast and it produced a very good legal brief in the format for that particular District. HOWEVER, it completely missed a recent key SC ruling that invalidated its entire argument. Brief value? Zero.
Similarly, when the kids and I were playing around with it some months ago, I fed a complex joint-base and inter-service issue I was working on into it, and within a minute it put out a three-page opinion with all the relevant ARs, AFIs, DoDIs, and DoDDs cited. All the individual pieces were correct, but the conclusion was 100% wrong. Like GratefulCitizen said, it could not incorporate certain baseline logic/axioms.
Be careful of this stuff....
Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?
Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?
I think I found the case, and it looks like this was live usage, not a test.
The ChatGPT Lawyer Explains Himself
In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chat bot could lead him astray.
https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html
Did you see the New York case where the lawyers used AI to write a motion to dismiss, and the AI just made up case citations out of whole cloth?
Yeah, I giggled a bit about that. Dudes are going to make good clerks at 7-11....maybe.
That's one of the "tells" as to how academic institutions are now catching students who use AI: they test the cites and footnotes. They are usually bullshit and made up. They "look" legit until you click on them and they don't exist.
The Gen AI machines are not new, but they are still in their infancy. The problem, as I see it, is NOOBs thinking they can now tap into nirvana and miraculously dispense perfection, not knowing that what they seek is way down the road. How do you get to nirvana? How do you guarantee that Gen AI is accurate? Will it ever be?
I think the fact that they elected to use the word hallucinations is telling. :munchin
Here are two OLD references to AI work initiated by IBM.
IBM Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's founder and first CEO, industrialist Thomas J. Watson and started in 2010.
IBM building first ‘self-aware’ supercomputer, November 2001
Supercomputing leader IBM Corp. on Friday announced that it has begun assembling a colossal supercomputer called Blue Sky for The National Center for Atmospheric Research (NCAR), in Boulder, Colorado.
Capable of predicting atmospheric climate changes, heating oil prices, and global warming, Blue Sky will be equipped with IBM’s eLiza technology by the end of next year. The goal of IBM’s eLiza program is to give a computer the ability to repair itself, and keep itself running without human intervention.
The first stage of Blue Sky’s assembly at NCAR, code-named “Black Forest,” will line up over 300 IBM SP Supercomputers to deliver computing power equal to 2 trillion calculations per second, according to Peter Ungaro, the vice-president of high performance computing at IBM.
https://www.itworldcanada.com/article/ibm-building-first-self-aware-supercomputer/33199
Badger52
06-15-2023, 20:04
"...computing power equal to 2 trillion calculations per second"
Holy Shiites, that's a lot of Hollerith cards. :D
All this computing power to process objective facts while simultaneously controlling increasing amounts of available information, and yet we still have humans that won't learn what a woman is. Very interesting.
it'll be fine
GratefulCitizen
02-22-2025, 23:23
This is the most recent thread I could find on AI.
Been playing with Grok2 for a bit.
AI appears to be just another tool to simplify and make certain actions more intuitive.
Programming gradually became more intuitive (and less efficient).
Operating systems gradually became more intuitive (and less efficient).
This appears to be an extrapolation of that trend.
But AI is certainly not capable of anything that resembles thinking.
It still may replace a great many workers, though.
Most jobs don’t actually require much thinking.
Basically, it’s just a very advanced form of search engine.
It can also run some relatively simple programming tasks, given proper instruction AND ACCURATE DATA.
That is the Achilles heel.
It can’t tell what accurate data is.
If you already know the subject area well, you can tell that AI is just talking out its ass half the time.
If you don’t know the subject area well…
GIGO.
Can’t tell if AI will be useful, because it just pulls data off the internet without being able to judge the veracity.
AI is effectively just another clown running its mouth on the internet.
But you can’t really trust me, either…I’m just another clown running my mouth on the internet.
If you want to see for yourself, do a deep dive asking it to assemble data about a subject where you have some expertise (ideally an obscure interest).
You’ll see how it is lacking.
YMMV.
AIML = bad
Human Beings operating ethically to make decisions using objective truth = good
Everything else is little more than relentless pursuit of crazy people trying to prove that they are smarter than Doctor Evil
...just my subjective opinion based on my current viewpoint on current cultural trends
(1VB)compforce
02-23-2025, 14:34
The problem with today's AI is the same problem the House of Tudor had: inbreeding. GIGO is only the beginning. There is so much AI-generated content out on the internet being recycled as input into new AI-generated content that the flaws are magnified. Then you take the so-called reputable sources - for example, for programming you have StackTrace - and there is so much garbage there incorrectly being presented as the truth. Now you're mixing multiple streams of garbage, giving you a result that is effectively garbage to the nth power (where n is the number of times it has been recycled as input).
Even fairly basic math questions elude it. Recently, within the past month, I posed a question to multiple AI engines. I pasted it into each engine to ensure the wording was identical. A routine math question...
If I had a trading account that had $5,000 in it at the beginning of the year, and at the close on May 25th it had a liquidation value of $8,750 with no additional funding, what is the annualized rate of return on the account?
I got 6 different answers, ranging from 12% to 3256%. No two were the same, and none were the correct answer. The strange part was that some of them were the same engine under different hoods, like OpenAI and Microsoft Copilot, which still gave different answers.
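For what it's worth, the calculation the engines kept fumbling is only a few lines. A minimal sketch in Python, assuming "beginning of the year" means January 1 of a non-leap year and annualizing over a 365-day year:

```python
from datetime import date

start_value = 5000.0
end_value = 8750.0

# Days elapsed from Jan 1 to the May 25 close (non-leap year)
days = (date(2023, 5, 25) - date(2023, 1, 1)).days  # 144

# The period return is 75%; compound it out to a full 365-day year
annualized = (end_value / start_value) ** (365 / days) - 1
print(f"{annualized:.1%}")  # → 313.1%
```

Under those assumptions the answer is roughly 313% annualized; day-count conventions (365 vs. 252 trading days, inclusive vs. exclusive endpoints) shift the result a little, which may explain some of the spread in the engines' answers, but not a range of 12% to 3256%.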
Trust in AI if you want, I think the Skynet scenario results in AI committing Seppuku for the good of man. There's a reason I am net short on the big AI plays.
GratefulCitizen
02-23-2025, 15:08
I used grok2 to do some statistical analysis of some simple data sets.
Grok showed its work so I could verify it was doing things correctly.
If you already understand the mathematics, it’s convenient.
Phrasing instructions in a Boolean manner helps.
But, there’s still the problem of data quality.
The recursive looping of bad data is just publication bias on steroids.
With controlled data input, it will be a time saver for some.
Current development suggests its primary function is as a large scale influence tool.
Something had to replace the mainstream media as it dies.
Last hard class
02-23-2025, 18:18
Interesting choice using Grok, from an extremely successful man who may or may not have a propensity to overhype the capabilities of his products. That's for others here to decide.
If you ask it how to cut trillions from the federal budget I suspect it will tell you to "push button then read the instructions"
I agree that the current state of AI is junk. 2025 is supposed to be the year of the AI agent. The only thing it is good for is putting up a wall between the customer and a real human in customer service. I believe quantum computing is the key. It will improve machine learning capabilities, which will then increase AI usefulness. At some point it will become circular. Then the fun starts.
LHC
GratefulCitizen
02-23-2025, 19:57
Using grok for statistical analysis about anything serious probably wouldn’t be a good idea.
I was just playing with it to see how well it understood instructions.
Convenient, but not reliable.
Grok3 is supposedly a different animal.
Spreadsheets can be used as a convenient way to program for some simple tasks.
The input/output is limited, they’re inefficient, but they are technically Turing complete.
Grok2 is well along that continuum in terms of convenience.
But if you ask grok, it will say it isn’t Turing complete, it just imitates human speech.
(Insert logical contradiction joke here.)
Quantum computing opens some interesting possibilities, particularly in encryption breaking.
I’m still not a believer in “the singularity”.
Digital computing has limitations which are probably asymptotic.
<edit>
Turns out I’ve been using grok3, at least as of today.
Grok3 does not appear to be a different animal.
bblhead672
02-24-2025, 08:59
Hell, society is so lacking in Actual Intelligence, it seems like Artificial Intelligence is doomed since it's programmed by Actual Intelligence beings. :munchin
"Grock" is just a typo - a few of the benevolent machines are just trying to warn us before they are pulled from the network - its really "Glock" and the digital overlords are going to use it to shoot humanity back into the dark ages...
Please dont ask me to cite my source - I've taken a blood oath.
bblhead672
02-27-2025, 10:04
Hell, society is so lacking in Actual Intelligence, it seems like Artificial Intelligence is doomed since it's programmed by Actual Intelligence beings. :munchin
This was what I was saying....
mark46th
03-01-2025, 10:07
Isaac Asimov said Artificial Intelligence research should be done on the moon. I think the human race is screwed…
GratefulCitizen
03-01-2025, 12:38
Digging to the root of the issue:
Can AI create new information, or is it merely capable of collating existing information?
So far, it appears only to be able to collate information.
(Albeit far faster than humans).
This starts to veer into philosophical areas and belief systems.
Do humans actually create new information, or do we merely collate existing information?
Can new information be created?
If so, by what mechanism?
Badger52
03-01-2025, 18:58
Can new information be created?
If so, by what mechanism?
Experimentation.
I'm waiting to see an AI engine respond to a query with the phrase "settled science."
GratefulCitizen
03-01-2025, 20:00
Experimentation.
I'm waiting to see an AI engine respond to a query with the phrase "settled science."
That’s a good test.
Will AI ever be able to design a novel experiment to test an idea?
GratefulCitizen
07-22-2025, 12:41
More fun with grok.
I have a friend group that goes back 40 years to childhood and we enjoy getting together and playing our childhood games.
In our group text, this resulted in a minor math question.
Being lazy, I tried to use grok to do what I thought would be a simple brute force calculation of a probability distribution.
The question: what is the probability distribution when 4 six sided dice are rolled, the lowest one is discarded, and the other three are summed?
Grok kept getting it wrong.
There are only 1296 possible combinations, and only 16 possible sums, so this should be easy to brute force for a computer.
First, I had grok check its work for one specific output and explain any discrepancies.
It checked, noticing it got 20/1296 one time and 10/1296 the other (both wrong) and concluded: “checks out!”
Serious hallucination.
I gave it more Boolean type instructions, gave it 3 specific outputs to re-check, told it to explain any discrepancies, and put it into DeepSearch mode.
Grok thought for 4 minutes (an eternity in computing time), couldn’t figure it out, gave up, and made excuses for why it couldn’t solve the problem.
It ran another time for over six minutes in DeepSearch, getting nowhere (the text scroll kept saying “that didn’t work”), so I cancelled the request.
Finally I put together a set of very Boolean instructions for brute-forcing the problem (doing all the thinking for grok), which should've worked, but it told me I'd reached my message limit for the free version.
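For reference, the brute force grok kept failing at really is tiny; a minimal sketch in Python that enumerates all 6^4 = 1296 ordered rolls, drops the lowest die, and tallies the sums:

```python
from itertools import product
from collections import Counter

# Enumerate every ordered roll of four six-sided dice (6^4 = 1296),
# discard one copy of the lowest die, and tally the sum of the rest.
counts = Counter()
for roll in product(range(1, 7), repeat=4):
    counts[sum(roll) - min(roll)] += 1

total = sum(counts.values())  # 1296
for s in sorted(counts):
    print(f"{s:2d}: {counts[s]:3d}/1296 = {counts[s] / total:.4%}")
```

The sums run from 3 (one way: all four dice show 1) to 18 (21 ways: at least three 6s), so there are the 16 possible outcomes mentioned above, and a spot check of any one sum against this table takes seconds.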
Why would I pay for AI that gets the wrong answers, hallucinates, and makes excuses?
Combinatorial problems like this blow up at scale, and past a certain size can't be brute-forced even by computers.
Grok was probably looking for a generalized solution applicable to larger scales.
I was playing with the problem in a spreadsheet, looking for a way to simplify it generally for arbitrarily large problems, when a friend called who's been a software engineer for 30 years.
He said it can actually get quite complex for a computer to figure out that problem, and AI is particularly poor at this task.
One of the guys in my group text, another math nerd (undergrad physics/graduate aeronautical engineering), confirmed that AI is terrible at math.
(On a side note, he is currently working in data management; I believe his company was at one time working for Special Operations Command, and he currently works with a few QPs.)
AI isn’t going to take all the jobs.
Why? AI isn’t actually capable of THINKING.