OpenAI CEO Sam Altman has offered a public apology to the people of Tumbler Ridge after his company acknowledged it flagged and banned a user’s account months before a deadly shooting but did not notify police. The episode has renewed scrutiny over the responsibilities of artificial intelligence firms to report potential threats and could influence how governments regulate the technology.
The suspect, identified by police as 18-year-old Jesse Van Rootselaar, is accused of killing eight people in the northern British Columbia community. According to reporting by The Wall Street Journal, OpenAI’s systems removed Van Rootselaar’s ChatGPT account in June 2025 after the user described scenarios involving gun violence. Staff debated whether to inform authorities at the time but ultimately did not do so; the company contacted Canadian law enforcement only after the attack.
Altman’s message to the town
Altman addressed residents in a letter published in the local paper, saying he is “deeply sorry” for the company’s failure to alert police earlier. He wrote that he had spoken with Tumbler Ridge’s mayor, Darryl Krakowka, and British Columbia Premier David Eby, and that they agreed a formal apology was required, though it was delayed out of respect for the community’s mourning.
In the letter Altman also pledged that OpenAI will work with government partners to reduce the risk of future tragedies and to improve how the company handles accounts that show dangerous behavior.
What OpenAI says it will change
- Revised referral criteria: The company says it will adopt more flexible rules for when accounts should be escalated to authorities.
- Direct law enforcement contacts: OpenAI plans to establish formal points of contact with Canadian police to speed communications when concerns arise.
- Updated safety processes: Additional internal safeguards are being introduced to better detect and respond to threats in conversational data.
Company officials described these steps as part of an ongoing effort to tighten safety systems, but offered few public details about how the new criteria will work or how decisions to refer accounts to authorities will be made in practice.
Reactions and wider stakes
Premier David Eby called the apology necessary but inadequate, saying the devastation in Tumbler Ridge demands more than words. The exchange highlights a broader debate over how much duty tech firms owe to police and whether voluntary practices are enough.
Federal and provincial officials in Canada are weighing new rules for artificial intelligence, though no final measures have been adopted. The incident has intensified calls for clearer legal obligations on companies operating powerful AI services—particularly when they detect content that could signal imminent harm.
Beyond legal and regulatory fallout, experts say the case will influence corporate policies and public expectations around transparency, reporting thresholds, and how AI firms balance user privacy with public safety.
If you are in crisis or thinking about suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline.