OpenAI debated alerting police months before a fatal shooting in Tumbler Ridge, British Columbia, after a user described violent scenarios to ChatGPT, according to people familiar with the matter. The company's automated systems flagged the messages, prompting internal discussions among roughly a dozen employees. Some staff believed the messages signaled potential real-world harm and urged notifying authorities.
OpenAI ultimately banned the account but did not contact law enforcement, saying the activity failed to meet its threshold of a credible and imminent risk. The incident highlights the growing tension between user privacy and public safety as AI chatbots become repositories for increasingly personal disclosures.
Want to know more? Read the full story at The Wall Street Journal.











