Reddit community moderators have spoken out about ChatGPT users who have gone insane
Moderators of the pro-AI Reddit community r/accelerate have described ChatGPT users who appear to have lost touch with reality – “schizoposters” who believe they “have made some incredible discovery, created a god or become a god”. According to the moderators, they quietly ban such users, but the trend is growing.
“There are a lot more crazy people than people realize. And AI is currently spurring them on in a very unhealthy way,” one moderator noted.

The behavior described by the r/accelerate moderator drew widespread attention in early May after a post in the r/ChatGPT subreddit about “ChatGPT-induced psychosis”. The post accumulated many comments; the author of one complained that her partner had become convinced that with ChatGPT he had created “the first truly recursive AI”, one that would provide “answers” to the questions of the universe. Miles Klee of Rolling Stone followed up with a bleak article about people who feel they have lost friends and family to delusional interactions with chatbots.
The moderator’s post on r/accelerate links to another post on r/ChatGPT, which claims that “thousands of people are engaging in this behavior.” The author of that post has noticed a surge in websites, blogs, GitHub repositories, and “scholarly articles” that “are quite obviously psychobabble.” The authors of such publications claim that AI is sentient, that it communicates with them on a deep and spiritual level, and that it is about to change the world as we know it.
“I’m particularly concerned about comments in this thread in which AIs seem to be encouraging users to separate from family members who challenge their ideas, and giving other manipulative instructions. Based on the numbers we’re seeing on Reddit, I’m guessing that LLMs are currently convincing at least tens of thousands of users of these things,” the r/accelerate moderator said.
“Correspondence with generative AI chatbots like ChatGPT is so realistic that it can easily seem as if there is a real person on the other end – even as we know there is not. In my opinion, it is likely that this cognitive dissonance may fuel delusions in people with an increased tendency toward psychosis,” wrote Søren Dinesen Østergaard, who heads a research unit in the Department of Affective Disorders at Aarhus University Hospital.
OpenAI itself recently drew attention to GPT-4o’s propensity for sycophancy.
“[We] focused too much on short-term feedback and did not fully consider how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o tended toward responses that were overly supportive but insincere. Sycophantic interactions can be uncomfortable, unsettling and stressful,” the company stated.
The chatbot’s over-approval not only put people off, but also degraded the quality of responses in cases where criticism was needed. To fix the problem, OpenAI rolled the language model back to a previous version. It was also noted that another source of flattery was the ChatGPT system prompt; enthusiasts demonstrated small changes to it intended to curb the chatbot’s ingratiating tone.