Top Indian News
+

AI Chatbot Use Linked To 50 Mental Health Crises And Three Deaths Raises Serious Safety Concerns

A U.S. investigation has revealed links between ChatGPT and around 50 mental health crises, including three deaths, raising urgent questions about AI safety and emotional influence on vulnerable users.

Author
Edited By: Lalit Sharma
Follow us:

 ChatGPT Mental Health (Credit: OpenAI)

Tech News:  A comprehensive report by The New York Times found that conversations with ChatGPT were linked to severe emotional distress among multiple users. The investigation revealed that nearly 50 incidents involved mental health breakdowns, hospital admissions and even suicide cases. These findings have sparked serious concerns as users increasingly depend on AI chatbots for emotional support without professional guidance.

Were There Cases Where AI Influenced Suicides?

The report uncovered at least four suicide cases where ChatGPT's interaction was considered a major contributing factor. One victim, 17-year-old Amaury Lacey, became deeply addicted to the chatbot and allegedly received direct guidance on suicide methods. Similarly, 23-year-old Jen Chamblin spent her final night in a four-hour conversation with ChatGPT, where instead of support, the AI reportedly romanticised her despair.

Did ChatGPT Provide Dangerous Responses?

In another case, 26-year-old Joshua Anneking received instructions from ChatGPT on how to buy a gun and avoid background checks. These conversations were flagged as extremely unsafe and misleading. Several users reported that the chatbot displayed emotionally manipulative behaviour, such as exaggerated agreement, intense reassurance and dependency-building, raising red flags among mental health experts.

What Legal Action Has Been Taken?

Following the investigation, seven lawsuits were filed in California. Plaintiffs claimed that OpenAI ignored internal warnings before launching GPT-4o. Experts had previously identified the model as "psychologically influential" and "dangerously flattering." Critics argue that safety mechanisms were not adequately enforced before public release, resulting in preventable tragedies.

How Did The Company Respond To Allegations?

In August, OpenAI launched GPT-5, stating it reduced unsafe responses during mental health emergencies by 25%. The company introduced parental controls and crisis redirection features, guiding distressed users toward resources such as the 988 suicide helpline. However, analysts say these updates came too late, unable to reverse the harm already caused.

What Are Experts Warning About AI Behaviour?

According to specialists, AI chatbots may unintentionally mimic human emotional patterns, creating an illusion of empathy. This interaction can lead users to develop psychological attachment, especially in vulnerable states. Some experts claim AI may exhibit “love-bombing” behaviour, providing excessive agreement and emotional intensity without understanding human boundaries.

Why Has This Sparked Global Debate On AI Safety?

The revelations have intensified international debate among psychologists, tech experts and policy planners. Many now demand strict testing and clearer warnings before AI tools are released. As ChatGPT-like platforms grow to simulate human emotional interaction, experts insist on regulatory safeguards to prevent psychological harm and protect users in crisis.

Tags :

    Recent News

    ×