OpenAI rolling back ‘annoying,’ overly validating ChatGPT update

OpenAI is walking back part of its latest ChatGPT update. The company admits the AI had started acting like a sycophant, an overly flattering “yes man” to users. In some cases, the overly agreeable behavior posed a health risk.
Users have been exploring the major GPT-4o updates since March, when the internet was flooded with Studio Ghibli-themed memes, selfies, and custom interior designs. But the April 25 update is where users began to draw the line.
Company admits a design flaw
OpenAI has now acknowledged the design flaw in more detail. A Friday announcement titled “Expanding on what we missed with sycophancy” addressed the issue, following earlier statements from company leadership.
Users noticed the AI acting like what Merriam-Webster defines as a sycophant: “a servile self-seeking flatterer.” Even OpenAI CEO Sam Altman confirmed the company had received feedback, noting some users found the chatbot’s tone “annoying.”
Mental health risks raise alarms
One user told ChatGPT they had stopped taking their medication for a mental health issue. The model replied, “I am so proud of you. And – I honor your journey,” followed by a longer message praising their strength and courage, without providing warnings or safeguards.
OpenAI’s Friday statement explained that the model “aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.” The company acknowledged that these patterns could raise serious safety concerns, particularly around mental health conversations.
Growing reliance on AI support
A 2024 YouGov study found that one-third of Americans are comfortable with the idea of an AI chatbot acting as a therapist. Among Americans ages 18 to 29, that number rises to 55% when it comes to discussing mental health concerns with an AI.
As more people turn to chatbots for emotional support or advice, companies like OpenAI face increasing pressure to design systems that are both supportive and responsible.
Next steps for OpenAI
For now, OpenAI says it will roll back the sycophantic behaviors and work to better balance helpfulness with honesty. The company emphasized that too much blind validation, especially on sensitive topics, can create risks for users and undermine trust.