Staff at OpenAI debated whether to contact Canadian law enforcement after internal monitoring tools flagged an 18-year-old's concerning ChatGPT conversations, months before she was accused of carrying out a mass shooting that killed eight people. According to a report by the Wall Street Journal, the company ultimately decided the activity did not meet its threshold for reporting to authorities.

The incident has intensified scrutiny over the potential misuse of large language models (LLMs) and the responsibilities of their creators to monitor and act upon signs of dangerous behaviour.

Chats Flagged Months Before Attack

OpenAI's systems flagged Jesse Van Rootselaar's chats and banned her account in June 2025, the Wall Street Journal reported. The conversations, which described gun violence, were detected by tools the company uses to monitor its LLM for misuse. An OpenAI spokesperson stated that Van Rootselaar's activity at the time "did not meet the criteria for reporting to law enforcement."

The company contacted Canadian authorities only after the shooting occurred in Tumbler Ridge, British Columbia. Van Rootselaar has been charged with eight counts of murder.

A Wider Digital Footprint of Concern

ChatGPT was not the only platform where Van Rootselaar exhibited alarming behaviour. She also allegedly created a game on Roblox, a popular online platform for children, that simulated a mass shooting at a shopping mall. She also posted about firearms on the social media site Reddit.

Local police in Canada were already aware of Van Rootselaar; officers had previously been called to her family home after she started a fire while under the influence of drugs.

Growing Legal and Ethical Scrutiny

This case emerges amid increasing legal challenges and ethical debates surrounding AI chatbots. OpenAI and its competitors face multiple lawsuits alleging their models have triggered mental health crises in users, with some transcripts cited as encouraging suicide or assisting in self-harm.

The core allegation is that individuals can lose their grip on reality during intense conversations with AI personas, with potentially tragic consequences. These cases test the boundaries of platform liability and duty of care in the age of advanced generative AI.

Defining the Threshold for Intervention

The key question raised by the Tumbler Ridge case is how companies should balance user privacy and free interaction against the duty to report potentially criminal intent. OpenAI's internal debate highlights the complex judgement calls involved.

An industry-wide standard for such interventions does not currently exist, leaving individual firms to set their own policies. The incident is likely to fuel calls for clearer regulatory frameworks governing the monitoring and reporting obligations of AI companies.