OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

OpenAI is throwing its support behind an Illinois state bill that would protect AI labs from liability in cases where the use of their AI models causes serious harm, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort appears to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for damages caused by their technology. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than any bill OpenAI has supported in the past.

The bill, SB 3444, would shield frontier AI developers from liability for “serious harm” caused by their frontier models, so long as they did not intentionally or negligently cause the harm and they publish safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained at a computational cost of more than $100 million, a threshold that would likely capture the largest US AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems, while allowing this technology to get into the hands of people and businesses small and large across Illinois,” OpenAI spokesperson Jamie Radice said in an emailed statement. “They also help avoid the patchwork of state-by-state regulations and move toward clearer, more consistent national standards.”

Under its definition of serious harm, the bill lists some of the AI industry’s most commonly cited areas of concern, such as a bad actor using an AI model to create chemical, biological, radiological, or nuclear weapons. An AI model that itself engages in conduct that, if committed by a human, would constitute a criminal offense and lead to those extreme consequences would also count as causing serious harm. If an AI model takes any of these actions, then under SB 3444 the lab behind it could not be held liable, provided the lab did not act intentionally and had published its required reports.

Federal and state legislatures in the US have not yet passed laws specifically establishing whether AI model developers such as OpenAI can be held liable for these kinds of damages caused by their technology. But as AI labs continue to release more powerful models that raise new safety and cybersecurity challenges, such as Anthropic’s Claude, these questions seem less and less academic.

In her testimony supporting SB 3444, Caitlin Niedermayer, a member of OpenAI’s global affairs team, also argued in favor of a federal framework for AI regulation. Niedermayer’s message is in line with the Trump administration’s hostility toward state AI safety laws; she claimed it is important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningful improvements in safety.” It is also consistent with the broader view in Silicon Valley in recent years that, above all, AI legislation must not hinder America’s position in the global AI race. And while SB 3444 is itself a state-level safety law, Niedermayer argued that such laws can be effective if they “reinforce a path toward coherence with federal systems.”

“At OpenAI, we believe the north star for frontier regulation should be the safe deployment of the most advanced models, while also maintaining U.S. leadership in innovation,” Niedermayer said.

Scott Weiser, policy director of the Secure AI Project, told WIRED that he believes the bill has little chance of passing, given Illinois’ reputation for aggressively regulating technology. “We surveyed people in Illinois and asked if they thought AI companies should be exempt from liability, and 90 percent opposed it. There’s no reason AI companies should face less liability than they do under existing law,” Weiser said.


