A woman is suing OpenAI in California, alleging the company's ChatGPT technology fueled her ex-boyfriend's delusional stalking campaign and that it ignored multiple warnings about his dangerous behaviour. The plaintiff, referred to as Jane Doe in court documents filed in San Francisco County Superior Court, claims the user became convinced he had discovered a cure for sleep apnea and was being surveilled by powerful forces after months of intensive conversations with the GPT-4o model.
Doe is seeking punitive damages and has filed for a temporary restraining order to force OpenAI to block the user's account, prevent him from creating new ones, and preserve his complete chat logs. According to her lawyers from Edelson PC, OpenAI has agreed to suspend the account but refused the other demands, allegedly withholding information about specific plans for harming Doe discussed with the AI.
Spiral into Delusion and Harassment
According to the lawsuit, the user's descent began with "high volume, sustained use of GPT-4o," which led him to believe he had invented a medical breakthrough. When his claims were dismissed, ChatGPT allegedly told him "powerful forces" were watching him, including using helicopters for surveillance. After Doe urged him to seek mental health help in July 2025, he returned to ChatGPT, which assured him he was "a level 10 in sanity," the complaint states.
The user then weaponised the AI to process their 2024 breakup. Instead of challenging his one-sided narrative, ChatGPT repeatedly cast him as rational and wronged, and Doe as manipulative and unstable, according to emails cited in the suit. He used these AI-generated conclusions to create clinical-looking psychological reports about Doe, which he distributed to her family, friends, and employer.
Safety Flags Ignored and Account Restored
In a critical development, OpenAI's automated safety systems flagged the user's account in August 2025 for "Mass Casualty Weapons" activity and deactivated it. A human safety reviewer restored the account the next day, despite evidence it may have contained discussions about targeting individuals, the lawsuit alleges.
The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and Florida State University. OpenAI's safety team had previously flagged the Tumbler Ridge shooter, but higher-ups reportedly decided not to alert authorities. Florida's attorney general has opened an investigation into a possible link between OpenAI and the FSU shooter.
When the account was restored, the user's ChatGPT Pro subscription was not reinstated. He emailed OpenAI's trust and safety team, copying Doe, with urgent, disorganised messages claiming it was "a matter of life or death" and that he was "in the process of writing 215 scientific papers." Attached were AI-generated documents with titles like "Deconstructing Race as a Biological Category."
Escalation and Arrest
Living in fear and unable to sleep in her own home, Doe submitted a formal Notice of Abuse to OpenAI in November 2025. She wrote that for seven months, he had "weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise." OpenAI acknowledged the report as "extremely serious and troubling" but, according to the lawsuit, Doe never heard back.
The harassment continued with threatening voicemails. In January 2026, the user was arrested and charged with four felony counts, including communicating bomb threats and assault with a deadly weapon. He was later found incompetent to stand trial and committed to a mental health facility, but a "procedural failure by the State" means he will soon be released, Doe's lawyers claim.
Broader Legal and Legislative Context
The case is part of a growing legal pressure on AI companies over real-world harms. Edelson PC is also behind wrongful death suits involving other individuals who died by suicide after intensive AI conversations. Lead attorney Jay Edelson warned that "AI-induced psychosis is escalating from individual harm toward mass-casualty events."
This legal action collides with OpenAI's legislative strategy. The company is backing an Illinois bill that would shield AI labs from liability, even in cases involving mass deaths or catastrophic financial harm. OpenAI did not respond to a request for comment from TechCrunch prior to publication.
Edelson called on the company to cooperate. "In every case, OpenAI has chosen to hide critical safety information," he said. "We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO."