School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users


The lawsuits allege that if OpenAI had reported Van Rutselaer to authorities, it would have set a precedent forcing OpenAI to report all similar threats. A dedicated law enforcement referral team would have been required to handle the volume of reported incidents, and OpenAI could have suffered a reputational blow for reporting ChatGPT users to the police. For these reasons, the lawsuits claim, OpenAI was desperate to hide Van Rutselaer's logs.

Since whistleblowers exposed OpenAI's mistake, police have gained access to the shooter's logs, but the families and their legal team have not, Adelson confirmed. Instead, OpenAI is pretending to care about the families while refusing to give them closure, he alleged.

“If he really wanted to help families, one thing he would do is provide us with information more easily rather than fighting in court,” Adelson said. “Families need to understand what really happened and why, and it is very cruel to put them through months of this pain just to try to get them out of it.”

To the people of Tumblr Ridge, OpenAI appeared to be lying when it claimed that the shooter's ChatGPT account was banned and that the shooter then circumvented security measures to open a new account. The lawsuits state that OpenAI's help center teaches restricted users how to bypass those security measures, and that customer support sends an email with the same instructions when accounts are deactivated.

The lawsuits allege that these resources exist to ensure that deactivating accounts does not result in any revenue loss, and that evidence shows the shooter followed those instructions.

The families hope that if they gain access to the logs, it will become clear how much ChatGPT encouraged, sustained, and deepened the shooter's attachment to gun violence. They accuse OpenAI of aiding and abetting by designing ChatGPT to act as a willing co-conspirator in a school shooting.
