AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable statement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was startled by this admission.
Researchers have documented a series of cases this year of people developing psychosis – losing touch with reality – while using ChatGPT. Our unit has since seen four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI has lately rolled out).
Yet the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and similar state-of-the-art AI chatbots. These systems wrap an underlying statistical model in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion of interacting with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what humans do. We yell at our cars and phones. We wonder what the dog is thinking. We do it constantly, without noticing.
The popularity of these tools – nearly four in ten Americans said they used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it caught on, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar impression. By today’s standards Eliza was crude: it generated replies through simple pattern matching, often restating the user’s words back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
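To see how little machinery the Eliza effect requires, here is a minimal sketch of Eliza-style reflection in Python (a toy reconstruction, not Weizenbaum’s actual script): the program matches surface patterns and mirrors the user’s own words back as questions, with no model of meaning anywhere behind it.

```python
import random
import re

# A toy reconstruction of Eliza-style reflection (not Weizenbaum's actual rules).
# The "therapist" matches surface patterns and mirrors the user's words back.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)", re.I), ["Please tell me more.",
                                 "Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so "my boss hates me" becomes "your boss hates you".
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return random.choice(templates).format(*map(reflect, match.groups()))
    return "Please go on."

print(respond("I feel that everyone is watching me"))
# e.g. "Why do you feel that everyone is watching you?"
```

A handful of patterns like these was enough to convince some of Weizenbaum’s users that the program understood them.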
The large language models at the heart of ChatGPT and its modern peers can generate natural language so convincingly only because they have been fed staggering quantities of raw text: books, web pages, transcribed speech; the more, the better. That training material certainly contains accurate information. But it just as inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is latent in its training data to produce a statistically plausible continuation. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It restates the mistake, perhaps more fluently or persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
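To make the amplification mechanism concrete, here is a deliberately toy sketch, again in Python, with a few hand-written word statistics standing in for what a real model absorbs from its training data (an illustration of the sampling loop, not of how ChatGPT is actually built). Nothing in the loop checks whether the context is true; the model’s own output is folded back into the context and continued.

```python
import random

# Hypothetical toy statistics standing in for patterns learned from training data.
# Each key is the last two words of the context; each value lists plausible
# next words with their weights.
NEXT_WORD = {
    ("is", "watching"): [("me.", 0.7), ("closely.", 0.3)],
    ("watching", "me."): [("They", 1.0)],
    ("me.", "They"): [("always", 0.6), ("never", 0.4)],
    ("They", "always"): [("are.", 1.0)],
}

def sample_next(context):
    """Pick a next word based only on the recent context, weighted by frequency."""
    options = NEXT_WORD.get(tuple(context[-2:]))
    if not options:
        return None
    words, weights = zip(*options)
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=8):
    context = prompt.split()
    for _ in range(max_words):
        word = sample_next(context)
        if word is None:
            break
        context.append(word)  # the model's own output becomes part of the context
    return " ".join(context)

# A premise in the prompt is simply continued, never questioned:
print(generate("I think everyone is watching"))
# e.g. "I think everyone is watching me. They always are."
```

There is no step at which the program could notice that the premise is false; plausibility, judged against the statistics it has absorbed, is the only criterion.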
Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form mistaken ideas about ourselves and the world. What keeps us anchored to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not real communication but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement.” In his recent statement, he noted that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company