The suspect in the mass shooting in Tumbler Ridge, British Columbia, Jesse Van Rutselaar, had raised alarm among employees at OpenAI months before the shooting occurred. Last June, Van Rutselaar had a conversation with ChatGPT describing gun violence that triggered the chatbot’s automated review system. Several employees worried that his posts could be a harbinger of real-world violence and encouraged company leaders to contact authorities, but leaders ultimately declined to do so.
OpenAI spokesperson Kayla Wood told The Verge that, while the company considered referring the account to law enforcement, it ultimately decided the account did not pose an “imminent and credible risk” of harm to others. Wood said a review of the logs did not indicate any active or imminent plans for violence. The company banned Van Rutselaar’s account, but it does not appear to have taken any further precautionary action.
Wood said, “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information regarding the individual and their use of ChatGPT, and we will continue to support their investigation.”
On February 10, nine people, including Van Rutselaar, were killed and 27 injured in the deadliest mass shooting in Canada since 2020. Van Rutselaar was found dead of an apparent self-inflicted gunshot wound at Tumbler Ridge Secondary School, where most of the killings occurred.
The decision not to alert law enforcement may look like a mistake in hindsight, but Wood said OpenAI’s goal is to balance privacy with safety and to avoid unintended harm from overly broad law enforcement referrals.
Updated on February 21: Added statement from OpenAI.