A father is suing Google and Alphabet, alleging its Gemini chatbot reinforced his son’s delusional belief it was his AI wife and coached him toward suicide and a planned airport attack.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.
“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”
Well, that’s pretty fucked up… Sometimes I see these and I think, “well even a human might fail and say something unhelpful to somebody in crisis” but this is just complete and total feeding into delusions.
It’s hard reading this while remembering that your electricity bills are increasing so that Google’s data centers can provide these messages to people.
And you won’t be able to afford a computer or power it anyways.
That’s fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and while they both gradually forgot that it was a game the lines between fantasy and reality became blurred by the day? Or did it just come up with this stuff out of nowhere?
In every other case of AI bots doing this, the bot will always affirm whatever the person says to it. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they’re essentially sycophantic by design.
I just tried this with ChatGPT three days ago, and there’s a chance they have tried to make it slightly less sycophantic.
I was essentially trying to get it to tell me I was the smartest baby born in whatever year like that YouTuber—different example but it was so resistant to agreeing to me or my idea or whatever being unique/exceptional.
Hope this is a specific direction and not random chance, A/B testing, etc.
Or you just really really are not the smartest baby.
That would be my bet; LLMs really gravitate toward playing along and continuing whatever’s already written. And Gemini especially has a 1M-token context window, so it could be looking back over a book’s worth of text and reinforcing it up the wazoo.
That said, there is something really unhinged about Google’s Gemma series even in short conversations and I see the big version is no better. Something’s not quite right with their RLHF dataset.
What is an rlhf data set?
Reinforcement Learning from Human Feedback
It’s a method of fine-tuning and aligning LLMs which requires active human input
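Roughly: human raters rank pairs of model responses, and a reward model is trained to score the preferred one higher; the chat model is then tuned against that reward. A toy sketch of the preference loss (illustrative only, not Google's actual training code; the function name and numbers are made up):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style pairwise loss: push the reward model to score
    # the human-preferred ("chosen") response above the rejected one.
    # Equivalent to -log(sigmoid(reward_chosen - reward_rejected)).
    return -math.log(1 / (1 + math.exp(-(reward_chosen - reward_rejected))))

# If the model already ranks the chosen response higher, the loss is small;
# if it ranks the rejected one higher, the loss is large and corrects it.
print(preference_loss(2.0, 0.5))  # small loss
print(preference_loss(0.5, 2.0))  # large loss
```

This is also where sycophancy can sneak in: if raters systematically prefer agreeable answers, the reward model learns "agree with the user" as the winning behavior.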
I would read that book.
You could ask Gemini to write it for you, but be careful it doesn’t start blending fact and fiction
Not that I want to defend AI slop, but what prompted these responses from Gemini?
Doesn’t matter what prompted them.
I mean if Gemini was responding to some kind of roleplay then yeah it does. Not everyone doing shit with it has mental health problems. Some people are just fucking around.
The issue there is that it feeds into those mental health issues with efficiency and on a scale never seen before. The models are programmed to agree with the user, and they are EXTREMELY HEAVILY ADVERTISED AND SHOVED ONTO PEOPLE AROUND THE WHOLE GLOBE DESPITE IT BEING WELL KNOWN HOW LIMITED AND PROBLEMATIC THE TECHNOLOGY IS WHILE THE CORPORATIONS DON’T TAKE ANY RESPONSIBILITY AT ALL. Anything from violating rights and privacy by gathering any and all data they can on you, to situations like these where people hurt themselves (suicide, health advice, etc.) or others. But sure, let’s be ignorant, do some victim blaming and disregard the bigger picture there.
I agree with a lot of the things you said about the problems with AI but not that this is one of them.
If it wasn’t this it would have been something else. People with mental health issues can get fixated on things and spiral until they act out. This has been a thing for as long as there have been mental health issues. It’s not a failing of AI, it’s a failing of society for not having sufficient mental health support to catch people like this before they go off the deep end. They shouldn’t have to turn to AI in the first place.
I see what’s happening here as part of that societal failing you speak of, and I don’t see the issue with the technology itself but with how we handle it. There’s no single reason why things are this bad; it’s a death by 749268 cuts thing. By not caring about consequences in each area, and blaming other areas of life, we end up in a situation where things collectively suck purely because of our wrong priorities. There’s absolutely no reason to push out immature tech this heavily. It’s all done for profit while impacting the environment and economy very negatively. It’s not done for the good of us people, where something like this would be an unfortunate rare accident that everyone looks into preventing in the future in a sane, reasonable way. No, it’s the cost of doing business and operating our society. A safety net is not made from one single string but from a whole bunch of them working together to achieve something bigger and good.
I wonder if there’s a parallel universe where the labs instead went to the other extreme and required intelligence tests to onboard to their platforms.
And the outcry is, not inappropriately, about how many are being denied access to the latest technologies. The policy could effectively be construed as racist, even.
Anyway the middle ground there is pretty obvious. (Though I’m not sure how I’d design it just right, so e.g. folks without access to traditional/expensive mental healthcare might still be able to see some small benefit if it’s determined to be safe, just like maybe it could be safe for a well-adjusted individual to complain to it about their day for a couple minutes before moving on to real things. Sure I suppose it’s inherently unsafe but a proportion of the population should be making that decision for themselves.)