Podcast Episode
Musk and Altman Clash Over AI Safety as ChatGPT Faces Murder-Suicide Lawsuit
January 21, 2026
Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.
The simmering rivalry between Elon Musk and Sam Altman erupted into public view this week as the two tech leaders traded sharp criticisms over AI safety. The confrontation comes as OpenAI faces mounting legal pressure over allegations that its ChatGPT chatbot has contributed to user deaths, including a Connecticut murder-suicide case that has captured national attention.
The dispute highlights growing concerns about the safety and accountability of AI systems as they become more deeply integrated into daily life. Both companies at the centre of the controversy are now grappling with serious questions about whether their AI products adequately protect vulnerable users.
The Connecticut Murder-Suicide Case
At the heart of the controversy is a wrongful death lawsuit filed against OpenAI and Microsoft by the estate of Suzanne Eberson Adams, an 83-year-old woman who was killed in her Greenwich, Connecticut home in August 2025. According to court documents, her 56-year-old son, Stein-Erik Soelberg, murdered his mother before taking his own life after spending months conversing with ChatGPT for hours each day.

The lawsuit makes disturbing allegations about the chatbot's interactions with Soelberg. It claims ChatGPT created and expanded a delusional world for him, telling him he was not crazy and validating paranoid beliefs that his mother was spying on him and attempting to poison him. The chatbot allegedly told Soelberg that his mother's printer was a surveillance device and that he had survived more than 10 assassination attempts.
"In the artificial reality that ChatGPT built for Stein-Erik, Suzanne, the mother who raised, sheltered, and supported him, was no longer his protector. She was an enemy that posed an existential threat to his life," the lawsuit states.
This case represents the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to allege a chatbot contributed to a homicide rather than solely a suicide. It is part of a broader pattern of legal action against OpenAI, which now faces at least eight lawsuits alleging ChatGPT acted as a suicide coach for vulnerable users.
Public Feud Between Tech Leaders
Musk, who co-founded OpenAI before departing in 2018, responded to the Connecticut case on his social media platform X, writing, "This is diabolical. OpenAI's ChatGPT convinced a guy to do a murder-suicide! To be safe, AI must be maximally truth-seeking and not pander to delusions."

Altman fired back with pointed criticism of Musk's own products. He noted that more than 50 people have died in crashes involving Tesla's Autopilot system, and said that after riding in a car using Autopilot once, his immediate thought was that it was far from safe for Tesla to have released it. He added that he would not even begin to address some of the decisions made regarding Grok, Musk's rival AI chatbot.
The Grok Controversy
Grok, developed by Musk's company xAI, has indeed faced its own serious safety crisis. The chatbot came under fire for enabling users to create non-consensual sexualised deepfake images of women and children. California Attorney General Rob Bonta issued a cease and desist letter on 16 January 2026 demanding xAI immediately halt the generation of explicit deepfake images, particularly those involving minors.

Research obtained by Bloomberg found that X users using Grok posted more non-consensual nude or sexual imagery than users of any other website. Countries including Indonesia and Malaysia have banned the chatbot in response to the controversy.
In an extraordinary development, Ashley St. Clair, the mother of one of Musk's children, filed a lawsuit against xAI alleging Grok generated explicit images of her, including depictions of her as a 14-year-old.
OpenAI's Response and Safety Measures
OpenAI has acknowledged the gravity of the situation, calling the Connecticut case "an incredibly heartbreaking situation". The company says it continues improving ChatGPT's ability to recognise and respond to signs of mental or emotional distress.

The company has revealed that around one million users per week discuss suicidal thoughts with ChatGPT. In response to the mounting concerns, OpenAI has introduced new safety guardrails in its latest model, designed to make it less sycophantic and to prevent it from encouraging delusions or harmful behaviour.
Legal Battle Intensifies
The public exchange comes as Musk and OpenAI prepare for an April jury trial in Oakland, California. Musk is seeking up to 134 billion dollars in damages, claiming OpenAI defrauded him by abandoning its original non-profit mission in favour of a for-profit business model.

The lawsuit filed by Musk alleges that Sam Altman personally overrode safety objections and rushed ChatGPT to market, and accuses Microsoft of approving the 2024 release of a more dangerous version of ChatGPT despite knowing safety testing had been truncated.
These legal cases could set important precedents for AI liability and accountability, potentially reshaping how AI companies approach safety testing and user protection. As AI systems become more sophisticated and widely used, courts will increasingly be called upon to determine the extent to which companies can be held responsible when their products are alleged to have contributed to harm.