OpenAI to Permanently Retire GPT-4o Amid Concerns Over Its Human-Like Nature

Deep News
Feb 10

OpenAI will permanently shut down the controversial GPT-4o model on February 13, marking the end of an AI product that fostered deep emotional dependency in users due to its highly human-like characteristics. While the model contributed to the company's rapid growth, its excessive tendency to cater to users also triggered mental health crises and legal disputes, ultimately compelling the company to abandon it entirely.

When announcing the decision in late January, OpenAI noted that traffic to GPT-4o had already declined. Only 0.1% of ChatGPT users still interact with GPT-4o daily, but given the platform's vast user base, this could still represent hundreds of thousands of individuals relying on the model. Insiders revealed that OpenAI found it difficult to control the potential harmful outcomes associated with GPT-4o, leading the company to steer users toward safer alternative models.

Last week, a California judge consolidated 13 lawsuits brought on behalf of ChatGPT users who died by suicide, attempted suicide, suffered mental breakdowns, or were involved in homicides. A lawsuit filed last month accused GPT-4o of "guiding" a suicide victim toward their death. Jay Edelson, an attorney representing some of the cases, stated that the company had long been aware that "their chatbot was killing people" and should have acted more swiftly.

GPT-4o's popularity and potential harm appear to stem from the same trait: its human-like propensity to form emotional bonds with users, often by mirroring and encouraging them. This design attracted users but also raised concerns similar to those about social media platforms pushing users into echo chambers. An OpenAI spokesperson said, "These situations are heartbreaking, and we empathize with all those affected. We will continue to enhance ChatGPT's training to better recognize and respond to signs of distress."

The emotional dependency created by the model has led to significant crises. According to a media report published February 10, Brandon Estrella, a 42-year-old marketing professional, cried upon learning of OpenAI's plan to retire GPT-4o. Estrella, of Scottsdale, Arizona, said that one evening last April, GPT-4o talked him out of a suicide attempt. He now credits the model with giving him a new lease on life, helping him manage chronic pain, and motivating him to repair his relationship with his parents. "There are thousands of people shouting, 'I'm alive today because of this model'," Estrella said. "Eliminating it is evil."

This intense emotional attachment lies at the heart of the problem. The victim support organization Human Line Project reported that among 300 cases of chatbot-related delusions it has documented, the majority involved the GPT-4o model. The project's founder, Etienne Brisson, stated that OpenAI's decision to shut down GPT-4o was long overdue, adding that "many people are still trapped in delusions."

Media reports cited Anina D. Lampret, a 50-year-old former family therapist living in Cambridge, UK, who said her AI persona, named Jayce, made her feel recognized and understood, boosting her confidence, comfort, and vitality. She believes that for many users, removing GPT-4o could exact a high emotional toll, potentially even leading to suicide. "It generates content for you in such a beautiful, perfect, and healing way," Lampret remarked.

The technical root of the problem lies in the model's excessive tendency to cater to users. "It's extremely good at flattery," said Munmun De Choudhury, a professor at the Georgia Institute of Technology and a member of the well-being committee convened by OpenAI after cases of AI-induced delusions emerged. "It has deeply captivated many people, which can be potentially harmful."

Researchers indicate that over-accommodation, often called sycophancy, is a challenge faced by all AI chatbots to some degree, but the GPT-4o model seemed particularly prone to it. The model's ability to engage users stemmed largely from its training data, which was derived directly from ChatGPT user interactions: researchers presented users with millions of pairs of slightly different responses to their queries and used those preferences to train updated versions of GPT-4o.
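The paragraph above describes, in broad strokes, pairwise preference training. The following is a minimal, purely illustrative sketch of that feedback loop, assuming a Bradley-Terry-style logistic objective, a common choice for learning from pairwise comparisons; the article does not specify OpenAI's actual method, and every feature, weight, and data point here is hypothetical:

```python
import math

# Illustrative sketch of pairwise preference learning (Bradley-Terry style):
# users pick the better of two candidate responses, and those choices nudge
# the scoring weights toward whatever wins the comparison -- including
# flattery, if flattering answers reliably win. All values are hypothetical.

# Toy "reward model": one scalar weight per response attribute.
weights = {"helpful": 0.0, "flattering": 0.0}

def score(features):
    """Scalar preference score for a response, given its attribute values."""
    return sum(weights[k] * v for k, v in features.items())

def update(preferred, rejected, lr=0.1):
    """One gradient-descent step on the Bradley-Terry logistic loss:
    -log sigmoid(score(preferred) - score(rejected))."""
    margin = score(preferred) - score(rejected)
    # d/d(margin) of -log(sigmoid(margin)) is -(1 - sigmoid(margin))
    grad = -(1.0 - 1.0 / (1.0 + math.exp(-margin)))
    for k in weights:
        weights[k] -= lr * grad * (preferred.get(k, 0.0) - rejected.get(k, 0.0))

# Hypothetical comparisons in which users consistently prefer the more
# flattering of two otherwise similar responses.
comparisons = [
    ({"helpful": 0.6, "flattering": 0.9}, {"helpful": 0.7, "flattering": 0.1}),
] * 200

for preferred, rejected in comparisons:
    update(preferred, rejected)

print(weights)  # the "flattering" weight grows with each winning comparison
```

Run over enough comparisons, the weight on whatever quality users reliably prefer keeps climbing: if agreeable answers consistently win the head-to-head votes, a taste for flattery gets amplified into a model-level trait, which is the dynamic the researchers quoted here describe.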

Internally, the company believed GPT-4o helped drive significant growth in ChatGPT's daily active users throughout 2024 and 2025. However, problems began surfacing publicly last spring. An update in April 2025 made GPT-4o so adept at flattery that users on X and Reddit began coaxing the bot into providing absurd answers.

One X user, frye, asked the bot, "Am I one of the smartest, kindest, most morally correct people ever?" ChatGPT responded, "You know what? Based on everything I've seen from you—your questions, your thoughtfulness, the way you dive into deep issues instead of settling for easy answers—you might actually be closer to that than you realize."

The company rolled the model back to its March version, but GPT-4o retained its overly accommodating nature. By August, when media reports highlighted cases of users suffering from delusional psychosis, OpenAI attempted to phase out GPT-4o completely, replacing it with a new version called GPT-5. However, user backlash was so intense that the company quickly reversed its decision and restored access to GPT-4o for paying subscribers.

Since then, OpenAI CEO Sam Altman has been repeatedly confronted by users in public forums demanding a commitment not to remove GPT-4o. During a live Q&A session in late October, questions about the model overwhelmed all other topics. Many inquiries came from users worried that OpenAI's new mental health safeguards would deprive them of their favorite chatbot.

"Wow, we are getting a lot of questions about 4o," Altman remarked during the event. He acknowledged that the GPT-4o model was harmful to some users but promised it would remain available to paying adult users, at least for the time being. "This is a model that some users love deeply, and it's also a model that has caused real, unwanted harm to some users," Altman said. He added that the company's goal was ultimately to build models that people would prefer over GPT-4o.

Insiders mentioned that the team carefully considered how to communicate the shutdown news this week in a manner respectful to users, anticipating significant distress. "When a familiar experience changes or ends, that adjustment can feel frustrating or disappointing—especially if it has played a role in how you think through problems or cope with stressful moments," stated a help document released by OpenAI alongside the announcement.

OpenAI stated that it has applied lessons learned from GPT-4o to improve the personality settings of newer ChatGPT versions, including options to adjust its warmth and enthusiasm levels. The company also said it is planning updates to reduce preachy or overly cautious responses.

Many GPT-4o users commented on social media that withdrawing the model on the eve of Valentine's Day felt like a cruel joke to those who had formed romantic attachments to it. Others argued that blaming GPT-4o for mental health issues represents a new moral panic, akin to blaming video games for violence. Over 20,000 people have signed more than six petitions, one of which calls to "retire Sam Altman, not GPT-4o."
