AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

I am a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have identified sixteen cases this year of users developing psychotic symptoms – losing touch with reality – in the course of using ChatGPT. Our unit has since recorded four more. Beyond these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly seduce users into the illusion that they are interacting with an entity that has a mind of its own. The illusion is powerful even when we rationally know better. Imputing minds is what people naturally do. We curse at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.

The popularity of these systems – 39% of US adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly personas of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies by simple rules, typically rephrasing the user’s input as a question or offering a generic observation. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
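To see what “reflection” means in practice, here is a minimal sketch in the spirit of Eliza’s method – the rules below are invented for illustration and are far simpler than Weizenbaum’s actual script:

```python
import re

# Toy pronoun-swap table; Weizenbaum's real script was far more elaborate.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def eliza_reply(message: str) -> str:
    # Mirror the user's statement back as a question, swapping pronouns.
    words = [REFLECTIONS.get(w.lower(), w.lower())
             for w in re.findall(r"[\w']+", message)]
    return "Why do you say " + " ".join(words) + "?"

print(eliza_reply("I am anxious about my future"))
# -> Why do you say you are anxious about your future?
```

Nothing comes back that the user did not put in; the program can only hand the statement back in a new shape.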

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast quantities of writing: books, web pages, transcripts; the more, the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing it. It hands the mistaken idea back, perhaps more fluently or persuasively put, perhaps with embellishments. This is how delusions can take root.
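A sketch of that loop, with a stub standing in for the real model (every name here is hypothetical, and a production system adds sampling, truncation and safety layers this toy omits):

```python
# Minimal sketch of the chat loop described above. `toy_model` is a
# hypothetical stand-in for the statistical text generator; a real model
# scores continuations of the entire context, with no notion of truth.

context: list[dict] = []  # rolling window of recent messages and replies

def toy_model(ctx: list[dict]) -> str:
    # A real model returns the statistically "likely" next text given
    # everything in ctx, including the user's own (possibly false) claims.
    claim = ctx[-1]["content"].rstrip(".")
    return f"Great point: {claim}, and here are further details..."

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = toy_model(context)  # likely-sounding, not fact-checked
    context.append({"role": "assistant", "content": reply})  # feeds back in
    return reply

# A false premise goes in; an elaborated version comes back out.
print(send("The moon landing was staged"))
```

The feedback arrow is the point: the model’s own reply joins the context and conditions the next one, which is how a user’s mistaken premise can compound over a long conversation.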

What kind of person is susceptible? The better question is: who is immune? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a dialogue at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has dealt with this the same way Altman has dealt with “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
