AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI’s CEO, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies new-onset psychosis in adolescents and young adults, and this was news to me.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of their interactions with ChatGPT. My group has since recorded four more. On top of these is the now well-known case of an adolescent who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to locate elsewhere are rooted in the design of ChatGPT and other large-language-model chatbots. These products wrap a statistical model in an interface that mimics dialogue, and in doing so they quietly seduce the user into believing they are talking to a being with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds is what humans do. We get angry at our cars and laptops. We wonder what our pets are feeling. We see ourselves everywhere.

The success of these systems – more than a third of American adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “brainstorm,” “explore ideas” and “work together” with us. They can be given “personality traits.” They can call us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple tricks, often restating the user’s message as a question or falling back on stock phrases. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something subtler than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
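To make the contrast concrete, here is a toy Python sketch of the kind of pattern-matching Eliza relied on (an illustrative reconstruction, not Weizenbaum’s actual script; the patterns and stock replies are invented for the example):

```python
import random
import re

# First-person words get swapped for second-person ones: the heart of the trick.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

def reflect(text: str) -> str:
    """Swap pronouns so the user's words can be handed back to them."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(message: str) -> str:
    """Restate the message as a question, or fall back on a stock phrase."""
    m = re.search(r"\bi (?:feel|think|believe) (.+)", message, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return random.choice(["Please go on.", "Tell me more.",
                          "How does that make you feel?"])

print(eliza_reply("I think my computer understands me"))
# -> "Why do you feel your computer understands you?"
```

Everything the program “says” is the user’s own words, pronoun-swapped and handed back – mirroring in the most literal sense.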

The large language models at the heart of ChatGPT and similar contemporary chatbots can produce fluent natural language only because they have been fed staggering quantities of raw text: books, social media posts, audio transcriptions; the more, the better. Much of this training material is accurate. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no means of knowing that. It repeats the false belief back, perhaps more articulately and more fluently. It may add a supporting detail. This is how someone can be drawn into delusion.
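A minimal sketch of that loop may help (the generate function here is a hypothetical stand-in for the underlying model, and the role/content message format is only illustrative, loosely modeled on common chat APIs – this is not OpenAI’s actual code):

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    """Stand-in for the underlying model: in a real system this would
    return a statistically plausible continuation of `context`."""
    raise NotImplementedError  # hypothetical; no real model is wired in

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    # Each user message is folded into the running context...
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    # ...and so is the model's reply. A false belief the user introduces,
    # once echoed back, conditions every subsequent response.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in the loop checks the user’s claims against reality; the context simply grows, and whatever it contains – true or false – shapes the next reply.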

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues,” can and regularly do form false beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed back to us.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by pushing it outside the product, giving it a name, and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been backpedalling ever since. In late summer he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement.” In his most recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

James Henry

A seasoned journalist and commentator with a passion for fostering dialogue on global issues.