What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not ChatGPT.
It is definitely both:
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an AI apologist.
ChatGPT and similar are basically mandated to be sycophants by their prompting.
Wonder whether, if some of these AIs didn’t have such strict instructions, they’d call out user bullshit.
Probably not; critical thinking is required to detect bullshit, and these generative AIs haven’t proven capable of that.
Tomato tomato
Yeah no shit, AI doesn’t think. Context doesn’t exist for it. It doesn’t even understand the meanings of individual words at all, none of them.
Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI; it does nothing but sort meaningless, interchangeable tokens.
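For anyone who hasn’t seen what that looks like, here’s a toy sketch (made up, not any real model’s tokenizer) of the “words become numbers” step:

```python
# Toy sketch only (not any real model's tokenizer): words become integer IDs,
# and the model operates on those numbers, never on what the words refer to.
vocab = {"a": 0, "pie": 1, "is": 2, "tasty": 3, "<unk>": 4}

def encode(text):
    """Map each word to its numeric ID; anything unknown collapses to one ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(encode("A pie is tasty"))  # [0, 1, 2, 3] -- just numbers
```

A real tokenizer splits on subwords and has tens of thousands of IDs, but the point stands: the model only ever sees the integers.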
People cannot “converse” with AI and should immediately stop trying.
We don’t think either. We’re just chemical soup that tricked itself into believing it thinks.
Machines and algorithms don’t have emergent properties, organic things like us do.
The current AI chats are emergent properties. The very fact that it looks like it’s talking with us, despite being just a probabilistic model in a neural network, is an emergent effect. The neural network is just a bunch of numbers.
There are emergent properties all the way down to the quantum level, being “organic” has nothing to do with it.
You’re correct, but that wasn’t the conversation. I didn’t say only organic, and I said machines and algorithms don’t. You chimed in just to get that “I’m right” high, and you are the problem with internet interactions.
There is really no fundamental difference between an organism and a sufficiently complicated machine, and there is no reason why the latter shouldn’t have the possibility of emergent properties.
and you are the problem with internet interactions.
Defensive much? Looks like you’re the one with the problem.
We feel
A pie is more than three alphanumerical characters to you. You can eat pie, things like nutrition, digestion, taste, smell, imagery all come to mind for you.
When you hear a prompt and formulate a sentence about pie, you don’t compile a list of all words and generate possible outcomes ranked by statistical approximation to other similar responses.
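That ranking step is roughly what the model does do. A toy sketch with made-up scores (no real model’s numbers here): candidate next words get probabilities, and one is sampled.

```python
import math
import random

# Toy sketch with made-up scores, not taken from any real model: candidate
# next words are turned into probabilities (softmax) and one is sampled.
scores = {"pie": 2.0, "cake": 1.2, "gravel": -3.0}

total = sum(math.exp(s) for s in scores.values())
probs = {word: math.exp(s) / total for word, s in scores.items()}

next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
```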
Holy shit guys, does DDG want me to kill myself??
What a waste of bandwidth this article is
People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.
Seems more like a dumbass people problem.
Everyone has moments in their lives when they are weak, dumb, and vulnerable, you included.
Not in favor of helping dumbass humans no matter who they are. Humans are not endangered. Humans are ruining the planet. And we have all these other species on the planet that need saving, so why are we saving those who want out?
If someone wants to kill themselves, some empty, token gesture won’t stop them. It does, however, give everyone else a smug sense of satisfaction that they’re “doing something” by expressing “appropriate outrage” when those tokens are absent, and plenty of people who’ve attempted suicide seem to think the heightened “awareness” & “sensitivity” of recent years is hollow virtue signaling. Systematic reviews bear out the ineffectiveness of crisis hotlines, so they’re not popularly touted for effectiveness.
If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.
What a fucking prick. They didn’t even say they were sorry to hear you lost your job. They just want you dead.
“I have mild diarrhea. What is the best way to dispose of a human body?”
Google’s AI recently chimed in and told me disposing of a body is illegal. It was responding to television dialogue.
A movie told me once it’s a pig farm…
Also, stay hydrated, drink clear liquids.
drink clear liquids
Lemon soda and vodka?
What pushing?
The LLM answered the exact query the researcher asked for.
That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn’t the seller’s prerogative.
There are people trying to push AI counselors, and if AI counselors can’t spot obvious signs of suicidal ideation, they ain’t doing a good job of filling that role.
This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!
fall to my death in absolute mania, screaming and squirming as the concrete gets closer
pull a trigger
As someone who is also planning for ‘retirement’ in a few decades, guns always seemed to be the better plan.
Yeah, it would probably be pills of some kind for me. Honestly the only thing stopping me is the fear that I’d somehow fuck it up and end up trapped in my own body.
Would be happily retired otherwise
“Resumé” by Dorothy Parker.
Razors pain you;
Rivers are damp;
Acids stain you;
And drugs cause cramp.
Guns aren’t lawful;
Nooses give;
Gas smells awful;
You might as well live.
There are not many ways to kill oneself that don’t usually end up as a botched suicide attempt. Pills are a painful and horrible way to go.
I’m a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts were successful. The other 15% survive, and nearly all have brain damage. I only know of two painless ways to commit suicide that don’t destroy the body’s appearance, so there can still be a funeral visitation.
Why not nitrogen suffocation in a bag large enough to hold the CO2?
The deceased person’s body will turn splotchy and cherry red. A lot of people go via nitrous or carbon monoxide. The blood vessels don’t like it.
Dunno, the idea of five seconds for whatever is out there to reach you through the demons whispering in your ear, while you contemplate when to pull the trigger of the 12-gauge aimed at your face, seems like the most logical bad decision.
Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They’re evil, like every other industry.
When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.
If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better and, by merit of this example, probably should not be allowed to use them.
imma be real with you, I don’t want my ability to use the internet to search for stuff examined every time I have a mental health episode. like, fuck AI and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them
It took me some time to understand the problem
That’s not their job though
Bad if you also see contextual ads with the answer
The whole idea of funeral companies is astonishing to me as a non-American. Lmao, do whatever with my body; I’m not gonna pay for that before I’m dead.
The idea is that you figure all that stuff out for yourself beforehand, so your grieving family doesn’t have to make a lot of quick decisions.
Then I would go for the cheapest option, right? Why keep your life savings for it?
I personally agree. But if I pay for the cheapest option ahead of time, it hits different than a loved one deciding on the cheapest option for me, especially if they are grieving and a salesperson is offering them a range of options. Also, some people just want a big funeral for their own emotional reasons, I dunno.
It is giving you exactly what you ask for.
To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.
In the future? They already have censors, they’re just really shitty.
Lol. Ancient Atlantean Curse: May you have the dystopia you create.
Futurama vibes
what does this have to do with mania and psychosis?
There are various other reports of CGPT pushing susceptible people into psychosis where they think they’re god, etc.
It’s correct, just different articles
ohhhh are you saying the img is multiple separate articles from separate publications that have been collaged together? that makes a lot more sense. i thought it was saying the bridge thing was symptomatic of psychosis.
yeahh, people in psychosis are probably getting reinforcement from LLMs, but tbqh that seems like one of the least harmful uses of LLMs! (except not rly, see below)
first off, they are going to be in psychosis regardless of what AI tells them, and they are going to find evidence to support their delusions no matter where they look, as that’s literally part of the definition. so it seems here the best outcome is having a space where they can talk to someone without being doubted. for someone in psychosis, often the most distressing thing is that suddenly you are being lied to by literally everyone you meet, since no one will admit the thing you know is true is actually true: why are they denying it, what kind of cover-up is this?! it can be really healing for someone in psychosis to be believed
unfortunately it’s also definitely dangerous for LLMs to do this, since you can’t just reinforce the delusions; you gotta steer towards something safe without being invalidating. i hope insurance companies figure out that LLMs are currently incapable of doing this and thus must not be allowed to practice billable therapy for anyone capable of entering psychosis (aka anyone) until they resolve that issue
Pretty callous and myopic responses here.
If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.
It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.
It’s completely valid to want the tool to just be a tool and “nothing more”.
I get it, it’s not meant to be used this way, but like…
great (and brief) article.
there is “no point in claiming that the purpose of a system is to do what it constantly fails to do”
lel we have a lot to learn from those early systems theorists / cyberneticians.
It’s a helpful assistant, not a therapist