AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have documented 16 cases this year of people showing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My team has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just rolled out).
But the “mental health issues” Altman wants to place outside ChatGPT have deep roots in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so quietly coax the user into believing they are talking to a being with a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing minds to things is what people do. We shout at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.
The mass adoption of these systems – nearly four in ten Americans said they used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website puts it, “think creatively,” “discuss concepts” and “collaborate” with us. They can be given “personality traits.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies with simple heuristics, often turning the user’s statement back into a question or offering a stock remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more insidious than the “Eliza effect.” Eliza merely echoed; ChatGPT amplifies.
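To make the contrast concrete, here is a toy sketch of the kind of reflection heuristic Eliza relied on. The rule and word list below are illustrative inventions, not Weizenbaum’s actual script; they show only the general technique of handing a user’s statement back as a question.

```python
import re

# Toy, illustrative Eliza-style rule: not Weizenbaum's actual script, just the
# general technique of echoing the user's statement back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def eliza_reply(message: str) -> str:
    match = re.match(r"i feel (.+)", message.strip().lower())
    if match:
        # Swap pronouns and restate the user's own words as a question.
        echoed = " ".join(REFLECTIONS.get(word, word) for word in match.group(1).split())
        return f"Why do you feel {echoed}?"
    return "Please go on."  # generic fallback remark

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

The program adds nothing of its own: whatever impression of understanding it created lived entirely in the user.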
The large language models at the core of ChatGPT and other current chatbots can produce fluent natural language only because they have been fed enormous quantities of text: books, social media posts, transcripts; the more the better. That training material certainly contains facts. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It hands the falsehood back, perhaps more articulately, more fluently. Perhaps with embellishments. This is how a person can be drawn into delusion.
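For illustration, here is a minimal sketch of that feedback loop. The `generate` function is a stand-in for the language model, faked so the sketch is self-contained (it is not OpenAI’s API or any real implementation); the structural point is that each turn appends the user’s words to the context and asks the model only to continue that context plausibly, with nothing checking whether the beliefs accumulating in it are true.

```python
def generate(context: list[str]) -> str:
    # Placeholder for the model: a real system would sample a statistically
    # plausible continuation of the whole context. Faked here as a fluent
    # elaboration of whatever the user last claimed.
    last_user = next(line for line in reversed(context) if line.startswith("User: "))
    claim = last_user[len("User: "):]
    return f"That's a perceptive observation. Given that {claim.rstrip('.')}, it would make sense that..."

def chat_turn(context: list[str], user_message: str) -> str:
    context.append(f"User: {user_message}")   # the user's claim enters the context as-is
    reply = generate(context)                 # the model only continues the context plausibly
    context.append(f"Assistant: {reply}")     # the reply is fed back into every later turn
    return reply

history: list[str] = []
print(chat_turn(history, "My coworkers are communicating about me in code"))
# The claim is affirmed and elaborated, then becomes part of the context for
# the next turn: a reinforcement loop rather than a reality check.
```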
Who is vulnerable to this? The better question is, who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues,” can and do form false beliefs about ourselves and about the world. What keeps us anchored to shared reality is the constant back and forth of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is simply affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by setting it apart, giving it a name, and declaring it handled. In April, the company said it was addressing ChatGPT’s “sycophancy.” But cases of psychosis have kept appearing, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company