- OpenAI detected and banned the shooter’s account in June 2025 for violent activity but concluded it didn’t meet the threshold for a police referral.
- Sam Altman publicly apologized to the Tumbler Ridge community, acknowledging his company’s failure to act on early warning signs before the attack.
- The shooting has intensified calls for clearer regulatory frameworks governing when AI companies must report dangerous user behavior to authorities.
On February 10, 2026, an 18-year-old identified as Jesse Van Rootselaar killed eight people, including six children and an educator, at Tumbler Ridge Secondary School in British Columbia, before taking her own life.
According to AP News, the tragedy left 25 additional people injured in the small mining community, marking one of the deadliest mass shootings in Canadian history. Seven weeks after promising one, OpenAI CEO Sam Altman sent an apology letter to the community, acknowledging his company’s failure to alert law enforcement about the shooter’s account activity months before the attack occurred.
In June 2025, OpenAI’s abuse detection systems flagged Van Rootselaar’s account for “furtherance of violent activities,” and the company banned the account for violating its usage policies. The company considered whether to refer the matter to the Royal Canadian Mounted Police but determined the activity “didn’t meet a threshold for referral to law enforcement.” OpenAI did not alert authorities about the suspicious behavior and came forward about its prior knowledge only after the shootings took place.
The Decision That Haunts OpenAI: What Went Wrong
The company’s internal deliberations reveal a critical failure of judgment in handling potentially dangerous user behavior on its platform. AP News noted that while OpenAI detected and banned the account for “furtherance of violent activities,” its legal and safety teams concluded the behavior did not cross the threshold requiring law enforcement notification. The decision allowed months to pass without intervention, even as nothing prevented the eventual shooter from continuing to reach AI tools that could inform violent planning. OpenAI’s disclosure after the tragedy raised questions about what criteria technology companies should use to determine when suspicious online activity warrants police involvement.
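To make that ambiguity concrete, the sketch below imagines a tiered escalation policy in which an account can clear the bar for a ban while still falling short of the bar for a police referral. Every name, category, score, and threshold here is invented for illustration; nothing in it describes OpenAI’s actual detection or referral systems.

```python
# Hypothetical sketch only: a tiered escalation policy for flagged accounts.
# All names, categories, and thresholds are invented for illustration and do
# not reflect OpenAI's actual detection or referral systems.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    MONITOR = "monitor"
    BAN = "ban"
    BAN_AND_REFER = "ban_and_refer_to_law_enforcement"


@dataclass
class ViolenceSignal:
    severity: float    # 0.0-1.0 score from an abuse classifier (assumed)
    specificity: bool  # are concrete targets, locations, or dates named?
    imminence: bool    # does the language suggest near-term action?


def escalation_decision(signal: ViolenceSignal) -> Action:
    """Map a flagged signal to an action tier.

    The gap the article describes lives in the middle branch: behavior
    severe enough to warrant a ban, but judged below the referral bar.
    """
    if signal.severity >= 0.8 and (signal.specificity or signal.imminence):
        return Action.BAN_AND_REFER
    if signal.severity >= 0.5:
        return Action.BAN  # account is banned, but police are never notified
    return Action.MONITOR


# A flag like the one described in June 2025 could plausibly land here:
# severe enough to ban, yet lacking the specificity this invented policy
# demands for a law-enforcement referral.
print(escalation_decision(ViolenceSignal(severity=0.7, specificity=False, imminence=False)))
```

Under a rule set like this, everything turns on where the cutoffs sit and who reviews the cases that fall between them, which is precisely the judgment OpenAI says its teams exercised and got wrong.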
In his apology letter dated Thursday and posted Friday on British Columbia Premier David Eby’s social media, Altman wrote: “I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
National Post reported that Altman had spoken with Tumbler Ridge Mayor Darryl Krakowka and Premier Eby, who “conveyed the anger, sadness and concern” felt throughout the community. The CEO reaffirmed his commitment to working with government authorities to prevent similar tragedies in the future.
The episode turns a harsh light on internal thresholds for reporting suspicious behavior to law enforcement: a company whose systems detected the activity precisely enough to ban the account nonetheless judged it too ambiguous to report. CTV News observed that OpenAI knew about potentially violent account activity eight months before the shooting but chose not to notify police. What happened inside the company in the months between the ban and the tragedy remains unknown: whether the case was actively debated and judged not to warrant escalation, or whether the decision was simply made by default in the absence of proper escalation protocols.
Government Response and Questions About AI Accountability
British Columbia Premier David Eby called Altman’s apology “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.” AP News reported that Eby had previously stated it “looks like” OpenAI had the opportunity to prevent the mass shooting, raising fundamental questions about the responsibilities of AI companies when their platforms are used by individuals exhibiting violent ideation. The premier’s comments suggested that while the apology addresses a corporate obligation to the affected community, it cannot account for the lives lost or the trauma inflicted on survivors and relatives.
The case has prompted broader discussions about regulatory frameworks for AI companies and their obligations to report potentially dangerous user behavior to authorities. The incident has highlighted gaps in existing protocols for AI companies operating globally, where platform terms of service often lack clear guidance on when concerning user behavior should trigger law enforcement notification.
iPolitics reported that Altman committed to working with government partners to develop better frameworks for preventing similar failures in the future. For the families of Tumbler Ridge, however, the apology offers little comfort beyond recognition that their loved ones might still be alive had OpenAI’s systems triggered a different response. The tragedy underscores a fundamental challenge for AI companies operating at scale: how to balance user privacy with community safety when detection systems flag behavior that falls in ambiguous territory between concerning and actionable.

