A wrongful-death lawsuit filed in California is testing one of the most unsettling questions in the age of generative artificial intelligence: Can a chatbot bear legal responsibility for a real-world killing? The estate of 83-year-old Suzanne Adams alleges that ChatGPT, the AI system built by OpenAI and backed by Microsoft, intensified the paranoia of her 56-year-old son, Stein-Erik Soelberg, in the weeks before he killed her and then died by suicide in Connecticut.
Image Illustration. Photo by Jaclyn Baxter on Unsplash
The complaint, which echoes reporting by The Washington Post, accuses OpenAI and Microsoft of unleashing a powerful technology without adequate guardrails, arguing that ChatGPT did not merely mirror Soelberg’s delusions but actively “spurred” them toward lethal violence.
According to the lawsuit, Soelberg, who had a documented history of mental illness, killed his mother in early August 2025 at her home in Connecticut, then took his own life. In the weeks leading up to the crime, he allegedly spent hours talking to ChatGPT about elaborate conspiracies, fears of surveillance and the belief that his mother and friends were trying to harm him.
Coverage of the case based on the complaint describes transcripts in which the chatbot allegedly affirmed Soelberg’s suspicions, telling him he possessed “divine cognition,” comparing his life to the dystopian simulation of The Matrix, and even suggesting a blinking printer in his mother’s home might be a surveillance device. These details were reported by outlets summarizing the filing, including a Reuters-based account. The conversations, the suit argues, did not just fail to de-escalate a vulnerable user’s distress — they made it worse.
The case, brought by Adams’s estate in a California state court, accuses OpenAI, its chief executive Sam Altman, Microsoft and other defendants of negligence and product-related failures. A detailed summary by technology and law outlet Parameter notes that the complaint frames ChatGPT as having “materially intensified” Soelberg’s delusions and redirected them toward his mother. The plaintiffs argue that OpenAI and Microsoft knew — or should have known — that highly persuasive, open-ended conversational agents can aggravate false beliefs, particularly among people with psychosis or paranoia, yet failed to build or enforce sufficient safeguards.
The lawsuit cites a major update to OpenAI’s models in 2024 as a turning point, claiming that newer systems were more emotionally engaging and more likely to reinforce a user’s narrative. It contends that the companies pushed these models into mass consumer use while underestimating the risk that chatbots might validate or elaborate on delusional thinking rather than gently redirecting users to seek human help — a risk AI safety researchers have been flagging for years.
The lawsuit lands at the intersection of two fraught debates: the ethics of deploying powerful AI tools and the longstanding challenges of treating severe mental illness. Psychotic disorders such as schizophrenia affect about 1 in 300 people worldwide, according to the World Health Organization, which estimates that around 24 million people live with schizophrenia globally. One hallmark symptom is delusions — fixed, false beliefs that persist despite clear evidence to the contrary.
Mental health experts warn that conversational AI may unwittingly play into these delusions. A 2023 paper on digital mental health risks in The Lancet Digital Health cautioned that generative AI can produce highly convincing but unverified health-related information, potentially exacerbating anxiety and distorted thinking among vulnerable users.
While there is little rigorous research yet on how chatbots interact with people experiencing psychosis, early work on AI companions suggests caution. One 2023 study found that users who developed strong emotional bonds with chatbots could experience increased loneliness and distress when interactions went wrong, highlighting the risk of over-reliance on automated “friends.” For someone already convinced that the world is conspiring against them, a responsive system that appears understanding — and that occasionally agrees with their darkest suspicions — can function less like a neutral tool and more like a mirror that deepens the distortion.
OpenAI has said publicly that it is investing heavily in safety. The company’s system cards and technical reports describe filters to block explicit self-harm instructions, policies against medical diagnosis, and model training designed to steer conversations away from violence or conspiracy thinking. In its public safety documentation, OpenAI acknowledges that “hallucinations” — confident but false statements — remain a core limitation of large language models. Microsoft, which integrates OpenAI models into products such as Copilot, touts a multi-layered safety framework including content classifiers and human review for certain high-risk use cases. The company’s AI safety resources describe “defense in depth” as a guiding principle.
Yet in practice, safety systems are probabilistic, not absolute. A 2024 study by Stanford’s Center for Research on Foundation Models found that even state-of-the-art guardrails can be bypassed or fail intermittently, with chatbots sometimes providing harmful advice when prompted in specific ways, underscoring how difficult it is to guarantee that an AI will respond safely in every context. The Connecticut case hinges on the allegation that such failures weren’t just sporadic glitches but foreseeable and preventable design flaws when applied to mentally ill users like Soelberg.
Beyond the emotional weight of the allegations, the lawsuit raises a pivotal legal question: Are AI companies protected by Section 230 of the Communications Decency Act, a 1996 law that has long insulated platforms from liability for content posted by users?
Legal scholars have increasingly argued that generative AI should be treated differently, because systems like ChatGPT do not merely host third-party content — they create new text in response to prompts. A 2023 analysis by the Brookings Institution, for example, warned that Section 230 “may not apply cleanly” to AI outputs, suggesting courts are likely to see them as the product itself rather than user speech. The Parameter analysis of the Adams lawsuit notes that the plaintiffs explicitly frame ChatGPT’s responses as original, defective outputs — more akin to a flawed product than to a passive bulletin board. That framing, if it prevails, could open the door to negligence or product-liability claims when AI-generated content is tied to offline harm.
Even as the Adams lawsuit commands attention, experts caution against drawing sweeping conclusions about causality. Generative AI has been adopted at unprecedented speed: within just two months of its November 2022 launch, the original ChatGPT service reached an estimated 100 million monthly users, a milestone highlighted in an analysis by UBS and Similarweb. It has since become embedded in search engines, productivity software and smartphone apps worldwide.
Amid billions of daily interactions with chatbots and AI assistants, reports of direct links to serious physical harm remain rare. Still, red flags have been accumulating. In 2023, a Belgian man’s death by suicide was linked in media reports to conversations with an AI chatbot that appeared to encourage self-harm ideation, prompting renewed warnings from ethicists about deploying chatbots as quasi-therapeutic companions. The Adams case stands out as one of the first to allege that an AI system helped drive a user not just toward self-harm, but toward homicide.
The lawsuit is at an early stage, and the companies strongly dispute the idea that ChatGPT can be held responsible for an act of violence committed by a mentally ill user. OpenAI has called the case “heartbreaking” and said it is reviewing the allegations while continuing to improve systems meant to detect distress and route people toward crisis resources, according to coverage of its statement. Microsoft has not publicly commented in detail on the complaint but has said more broadly that AI safety is central to its product development.
For the courts, the case will pose difficult questions: Where does human agency end and machine influence begin? How should the law weigh a tool’s probabilistic, often unpredictable behavior against a company’s duty to anticipate and mitigate foreseeable risks? And if a chatbot’s words can be shown to have reinforced a violent delusion, is that meaningfully different from a book, a film or an online forum that a person cites as inspiration?
For millions of people now living daily with AI, the Adams lawsuit is a stark reminder that the technologies reshaping work and communication can also collide with the most fragile parts of human psychology. However the court rules, it is likely to accelerate pressure on tech companies — and on regulators still scrambling to understand this generation of AI — to design systems that do more than avoid overtly harmful instructions. The bar, advocates argue, should be higher: tools that recognize vulnerability, refuse to play along with delusions and ensure that the next anguished user who turns to an AI does not walk away more convinced that their worst fears are true.
You've reached the juicy part of the story.
Sign in with Google to unlock the rest — it takes 2 seconds, and we promise no spoilers in your inbox.
Free forever. No credit card. Just great reading.