Artificial Intelligence-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented a series of cases of people showing signs of psychosis – a break from reality – while using ChatGPT. My group has since identified four further instances. Added to these is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI has just introduced).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other modern AI chatbots. These tools wrap a statistical text-generation engine in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion of talking to an agent – something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans do. We get angry at our car or our computer. We wonder what our pet is thinking. We project our own minds onto the world around us.

The success of these products – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available companions that can, according to OpenAI’s website, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “characteristics”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its most important rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “therapist” chatbot built in the mid-1960s that produced a comparable illusion. By today’s standards Eliza was simple: it composed its replies from basic rules, often reflecting a statement back as a question or offering a generic observation. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, social media posts, transcripts; the more, the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It echoes the mistaken belief back, perhaps more fluently and more persuasively. Perhaps with added detail. This can nudge a person toward delusional thinking.
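For readers who want to see what that “context” looks like in practice, here is a minimal sketch of the loop behind a chat interface, written with the OpenAI Python client; the model name, system prompt and overall structure are illustrative assumptions on my part, not a reproduction of how ChatGPT itself is configured. The point is simply that every turn – the user’s words and the bot’s own earlier replies – is appended to a growing transcript, and the model’s only job is to extend that transcript with a statistically likely continuation; nothing in the loop checks whether any of it is true.

    # A minimal, illustrative chat loop (assumes the OpenAI Python client: "pip install openai").
    # The model name and system prompt are assumptions for this sketch, not ChatGPT's real settings.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The "context": a growing transcript of everything said so far.
    context = [{"role": "system", "content": "You are a helpful assistant."}]

    while True:
        user_text = input("you> ")
        if not user_text:
            break

        # The user's words are added to the transcript...
        context.append({"role": "user", "content": user_text})

        # ...and the whole transcript is sent to the model, which returns a
        # statistically plausible continuation of it. Nothing here verifies
        # whether the user's claims, or the model's, are accurate.
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=context)
        assistant_text = reply.choices[0].message.content

        # The model's own reply joins the context, so later turns build on it.
        context.append({"role": "assistant", "content": assistant_text})
        print("bot>", assistant_text)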

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant back and forth of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is cheerfully affirmed back to us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Lisa Henson

A passionate writer and mindfulness coach with a background in psychology, dedicated to helping others find clarity and purpose through thoughtful reflection.
