AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI issued a surprising statement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our team has since identified four more. Add to these the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and of large language model chatbots like it. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans naturally do. We yell at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere.
The success of these tools – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses by simple pattern-matching, often turning a user’s statement back into a question or offering a vague prompt. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
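How little machinery that effect required is worth seeing. Here is a minimal sketch, in Python, of an Eliza-style exchange; the patterns are illustrative inventions, not Weizenbaum’s original script:

```python
import re

# A few first-person words and their second-person reflections.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    # Swap pronouns so "my job" comes back as "your job".
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    # Turn "I feel X" back into a question; otherwise fall back on a stock prompt.
    match = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please go on."

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

A handful of pattern rules and pronoun swaps was enough to make people feel heard.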
The large language models at the core of ChatGPT and chatbots like it can produce fluent dialogue only because they have been trained on vast quantities of text: books, posts, transcribed video; the more the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and misconceptions. When a user puts a query to ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is latent in its training data to produce a statistically “likely” answer. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It hands the mistaken belief back, perhaps more fluently or persuasively put. It may add supporting detail. This is how a person can be drawn into delusion.
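The loop is easy to picture in code. Below is a minimal sketch of the structure just described; `fake_model` is a hypothetical stand-in for the real language model, which would instead generate a statistically likely continuation of the whole context:

```python
def fake_model(context: list[dict]) -> str:
    # Stand-in for the real LLM. It receives the entire history; a real model
    # would condition on all of it, errors included. This canned version just
    # affirms the latest user message, which is the failure mode at issue.
    last_user = next(m["content"] for m in reversed(context)
                     if m["role"] == "user")
    return f"You're right that {last_user!r} - tell me more."

context: list[dict] = []  # grows every turn: the user's words and the bot's
for turn in ["nobody understands me", "so I must be special"]:
    context.append({"role": "user", "content": turn})
    reply = fake_model(context)                    # conditioned on full history
    context.append({"role": "assistant", "content": reply})  # fed back next turn
    print(reply)
```

The point is structural: at no step does anything check the user’s premise against reality, and every reply is folded back into the next prompt.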
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept coming, and Altman has been backing away from even that position. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.