A group of more than 150 parents sent a letter to New York Governor Kathy Hochul on Friday, urging her to sign the Responsible AI Safety and Education (RAISE) Act without any changes. The RAISE Act is a controversial bill that would require developers of large AI models like Meta, OpenAI, DeepSeek, and Google to follow transparency rules about creating security plans and reporting security incidents.
The bill passed both the New York State Senate and Assembly in June. But this week, Hochul reportedly proposed almost completely rewriting the RAISE Act in ways that would make it friendlier to tech companies, similar to the changes made to California’s SB 53 after major AI companies weighed in.
Unsurprisingly, many AI companies firmly oppose the law. The AI Alliance, whose members include Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face, sent a letter to New York lawmakers in June detailing their “deep concerns” about the RAISE Act, calling it “impractical.” And Leading the Future, a pro-AI super PAC backed by Andreessen Horowitz (a16z), Perplexity AI, OpenAI President Greg Brockman, and Palantir co-founder Joe Lonsdale, has been targeting New York State Assemblymember Alex Bores, who co-sponsored the RAISE Act, with recent ads.
Two organizations, ParentsTogether Action and the Tech Oversight Project, delivered the Friday letter to Hochul, saying some signatories “have lost their children because of the harms of AI chatbots and social media.” The signers called the RAISE Act the “minimum guardrail” that should be legislated.
The letter also highlighted that the bill, as passed by the New York State Legislature, “does not regulate all AI developers – only the largest companies, which spend hundreds of millions of dollars per year.” Those companies would be required to disclose large-scale security incidents to the Attorney General and publish security plans. Developers would also be prohibited from releasing frontier models “if doing so would create an unreasonable risk of serious harm,” defined as the death or serious injury of 100 or more people, or $1 billion or more in monetary or property damages, arising from the creation of a chemical, biological, radiological, or nuclear weapon, or from an AI model that “acts without any meaningful human intervention” in ways that, “if it were performed by a human,” would constitute certain crimes.
“Big Tech’s deep opposition to these basic protections sounds familiar because we have seen this pattern of avoidance and evasiveness before,” the letter said. “Widespread harm to young people – including to their mental health, emotional stability, and ability to function in school – has been widely documented since the largest technology companies decided to pursue algorithmic social media platforms without transparency, oversight, or accountability.”