AI Liability
Tumbler Ridge Families Sue OpenAI for $1B+ Over ChatGPT's Role in Shooting
Seven lawsuits filed in San Francisco allege OpenAI's safety team flagged the Tumbler Ridge shooter's plans but leadership overruled reporting to police. The cases could reshape legal accountability for AI companies whose chatbots interact with dangerous users.
Seven Lawsuits Filed in San Francisco
Seven lawsuits were filed on April 29, 2026, in U.S. federal court in San Francisco against OpenAI and CEO Sam Altman on behalf of victims of the school shooting in Tumbler Ridge, British Columbia. According to Al Jazeera, six of the suits bring wrongful death claims for the five slain children and the educator killed in the attack, while the seventh is brought on behalf of critically injured 12‑year‑old Maya Gebala. Lawyer Jay Edelson says he expects to file more than two dozen lawsuits in total in the coming weeks.
The plaintiffs are seeking over $1 billion in damages for Gebala's case alone; Edelson told BBC News he expects juries to award historic amounts. The suits also seek punitive damages, recovery of legal costs, and an injunction requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral protocols.
What Happened in Tumbler Ridge
On February 10, 2026, 18‑year‑old Jesse Van Rootselaar first shot and killed her mother and 11‑year‑old half‑brother at their family home in Tumbler Ridge, British Columbia, then went to her former school — Tumbler Ridge Secondary School — and killed five students and an educational assistant. According to Al Jazeera, the students killed were Zoey Benoit, 12; Abel Mwansa Jr., 12; Ticaria "Tiki" Lampert, 12; Kylie Smith, 12; and Ezekiel Schofield, 13. Education assistant Shannda Aviugana‑Durand, a mother of two, was also killed. Twenty‑five people were injured. Van Rootselaar died by suicide during the attack.
The attack is one of the deadliest mass shootings in modern Canadian history. Maya Gebala, 12, was shot three times (in the head, neck, and cheek) and remains hospitalized with what the suits describe as a "catastrophic brain injury."
The Core Allegation: Safety Team Overruled
The most explosive claim in the lawsuits is that OpenAI's own safety team identified the threat and recommended contacting the RCMP — but executive leadership, including CEO Sam Altman, vetoed that decision. According to BBC News, a 12‑person safety team flagged the shooter's conversations in June 2025 as indicating "an imminent risk of serious harm to others" and recommended contacting the Royal Canadian Mounted Police.
The suits allege senior leadership chose not to alert police "in order to protect the valuation and reputation of the $850bn company." As The Globe and Mail reports, the lawsuits state: "They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk." Reporting the threat, the suits argue, would have exposed the danger OpenAI's signature product routinely poses to human life, which could have complicated a coming initial public offering that could be worth a trillion dollars, per The Globe and Mail.
The Ban That Wasn't
After the flagged conversations, OpenAI deactivated the shooter's account but did not contact police. The lawsuits allege the shooter then simply created a new account using her real name (but a different email) and "continued using ChatGPT to plan the attack," per BBC News. The suits allege OpenAI's public troubleshooting guide helps deactivated users sign up again, and that the company lied about having "banned" the shooter.
OpenAI told the BBC it "refuted" the allegation, saying it "revokes access to its services from banned users, which may include disabling their account and taking steps to stop them from opening new accounts." But the ease of account recreation raises questions about the effectiveness of AI platform bans — a problem every builder deploying AI chatbots will need to address.
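OpenAI's enforcement stack is not public, so any illustration is necessarily speculative. But the failure mode the suits describe, a banned user re-registering under her real name with a fresh email, is easy to sketch. In the hypothetical check below, the Signup type, the banned-signal registries, and flag_possible_ban_evasion are all invented for illustration, not OpenAI's actual system; the point is that a ban keyed only to an email address, a signal the user can rotate for free, is barely a ban at all.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Signup:
    """Signals available at account creation (hypothetical schema)."""
    full_name: str
    email: str
    payment_fingerprint: Optional[str] = None  # e.g. a hashed card token

# Hypothetical registries built from previously banned accounts.
BANNED_EMAILS = {"old_address@example.com"}
BANNED_NAMES = {"pat doe"}             # stored in normalized form
BANNED_PAYMENT_PRINTS = {"fp_abc123"}

def normalize_name(name: str) -> str:
    """Lowercase and collapse whitespace so trivial variants still match."""
    return " ".join(name.lower().split())

def flag_possible_ban_evasion(s: Signup) -> list[str]:
    """Return the reasons, if any, this signup may belong to a banned user."""
    reasons = []
    if s.email.lower() in BANNED_EMAILS:
        reasons.append("email matches a banned account")
    if normalize_name(s.full_name) in BANNED_NAMES:
        reasons.append("name matches a banned account")
    if s.payment_fingerprint in BANNED_PAYMENT_PRINTS:
        reasons.append("payment instrument matches a banned account")
    return reasons

# The pattern alleged in the suits: same real name, fresh email.
retry = Signup(full_name="Pat  Doe", email="new_address@example.com")
print(flag_possible_ban_evasion(retry))  # ['name matches a banned account']
```

Name matches alone would over-flag in production, which is why hits like this typically route to human review rather than an automatic block; the sketch only shows that a ban keyed to a rotatable signal is not a ban.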
Legal Theories: Negligence, Aiding and Abetting, Design Defect
The lawsuits deploy multiple legal theories that, if successful, could create new liability standards for AI companies. The negligence claim is the strongest on its face: OpenAI had actual knowledge of the threat through its own safety team's assessment, and allegedly failed to take reasonable action. The Globe and Mail reports the suits also allege "design defects of its flagship ChatGPT chatbot pushed the shooter toward the violence" — a product liability theory that could make the architecture and design choices of AI systems subject to legal scrutiny.
Perhaps most novel is the aiding‑and‑abetting theory: the suits accuse OpenAI and Altman of "aiding and abetting the Tumbler Ridge mass shooting by failing to alert law enforcement," as BBC News reports. This frames inaction — failure to report — as a form of complicity. If this theory gains traction, it would mean AI companies have an affirmative duty to report dangerous user behavior, not merely a duty to avoid causing harm directly.
OpenAI's Response
An OpenAI spokesperson described the shooting as an unspeakable tragedy and said the company has a zero‑tolerance policy against the use of its tools to assist in committing violence. The company also claims to have strengthened safeguards, including improving how ChatGPT responds to signs of distress, connecting people with mental health resources, strengthening how it assesses and escalates potential threats of violence, and improving detection of repeat policy violators, per Al Jazeera.
CEO Sam Altman sent an open letter to the community expressing deep sorrow for not alerting law enforcement sooner, acknowledging that words could never be enough but that an apology was necessary to recognize the irreversible loss the community had suffered, per The Globe and Mail. Edelson called Altman's apology empty, characterizing it as an effort to get out in front of the coming legal challenges, per Al Jazeera. On April 28, OpenAI published a blog post outlining how it responds to users who display potentially dangerous behavior, noting it trains models to refuse requests that could meaningfully enable violence and notifies law enforcement when conversations suggest an imminent and credible risk of harm to others.
A Growing Wave of AI Liability Cases
The Tumbler Ridge lawsuits are part of a rapidly expanding legal landscape for AI companies. Florida Attorney General James Uthmeier is conducting a criminal probe into whether ChatGPT played a part in the Florida State University shooting, where the chatbot allegedly "advised the shooter on what type of gun to use, ammunition, and what time of day and where on campus the shooter could encounter a higher population," per BBC News. Uthmeier stated: "If it was a person on the other end of that screen, we would be charging them with murder."
Character.AI has faced lawsuits related to teen interactions with chatbots, including self‑harm and suicide cases. The FTC is inquiring into seven tech companies over AI "friend" chatbots and child protection. A coalition of 42 state attorneys general sent a letter to 13 AI companies about safety concerns. BC Premier David Eby is calling on Ottawa to set national standards for mandatory reporting thresholds, and Canada's federal government is studying AI regulation frameworks.
For builders, the message is unambiguous: the legal environment for AI products is changing fast. The combination of negligence claims based on internal knowledge, product liability theories about design defects, and novel aiding‑and‑abetting doctrines means that building AI chatbots without rigorous safety escalation protocols is becoming a legal liability — not just an ethical one.
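None of the public reporting describes OpenAI's internal tooling, so the sketch below illustrates the principle rather than anyone's actual system. Still, the structural fix the plaintiffs demand translates directly into code: once a safety assessment crosses a defined severity, escalation is automatic and auditable, with no discretionary override in the path. All the types and thresholds here (Risk, Assessment, EscalationPipeline) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    CONCERNING = auto()  # worth a human look
    IMMINENT = auto()    # credible, imminent threat to others

@dataclass
class Assessment:
    """A safety-team judgment about a flagged conversation (hypothetical)."""
    conversation_id: str
    risk: Risk
    rationale: str

@dataclass
class EscalationPipeline:
    audit_log: list = field(default_factory=list)

    def handle(self, a: Assessment) -> str:
        # Every decision is recorded, so a trail exists in discovery.
        self.audit_log.append(f"{a.conversation_id}: {a.risk.name} ({a.rationale})")
        if a.risk is Risk.IMMINENT:
            # The design choice at issue in the suits: this branch is
            # reached mechanically, not subject to a leadership veto.
            return "refer_to_law_enforcement"
        if a.risk is Risk.CONCERNING:
            return "queue_for_human_review"
        return "no_action"

pipeline = EscalationPipeline()
action = pipeline.handle(
    Assessment("conv-001", Risk.IMMINENT, "explicit attack planning"))
print(action)  # refer_to_law_enforcement
```

The legally significant property is the control flow, not the classifier: the suits turn on the allegation that a human veto sat between the safety team's imminent-risk finding and the referral.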