The White House proposes a new AI policy framework that would supersede state laws

The White House has announced a new AI policy framework that calls on Congress to create federal regulation preempting state AI laws. The Trump administration has made several attempts to eliminate more restrictive state-level AI regulation, but has so far been unsuccessful, most notably during the passage of the “One Big Beautiful Bill.”

The framework focuses on a variety of topics, covering everything from children’s privacy to the use of AI in the workforce. “Importantly, this framework can only be successful if implemented equitably across the United States,” the White House writes. “A patchwork of conflicting state laws will undermine American innovation and our ability to lead in the global AI race.”

In terms of child privacy protections, the framework calls for Congress to require companies to provide tools such as “screen time, content exposure, and account controls,” while also confirming that “existing child privacy protections apply to AI systems,” including limits on collecting and using data for AI training. The framework also says states “should be allowed to enforce their own generally applicable laws to protect children, such as bans on child sexual exploitation material, even where such material is generated by AI.”

The energy use and environmental impact of AI infrastructure are a concern, but the White House’s policy proposals are primarily concerned with the cost of data centers. The framework suggests that federal AI regulation should ensure higher electricity costs are not passed on to people living near data centers, while streamlining the process for building AI infrastructure so companies can “generate power on-site and behind the meter.” The framework also calls for fewer restrictions on the software side of AI development, proposes a “regulatory sandbox for AI applications” and asks Congress to “provide resources to make federal datasets accessible to industry and academia in AI-ready formats.”

While a recent AI bill from Senator Marsha Blackburn (R-Tenn.) seeks to eliminate Section 230, the law that shields platforms from liability for content posted by their users, the framework appears to propose the opposite. “Congress should prevent the United States government from restricting, forcing, or altering the content of technology providers, including AI providers, based on partisan or ideological agendas,” the White House writes. The framework is similarly hands-off when it comes to copyright and the use of intellectual property to train AI. “Although the Administration believes that training AI models on copyrighted material does not violate copyright laws,” the White House writes, it supports settling the question in the courts rather than through legislation. However, the White House believes Congress should “consider enabling a licensing framework” to allow IP holders to negotiate compensation from AI providers.

The key idea in the White House proposal is that federal regulation should take precedence over state law, specifically so that states “do not regulate AI development,” do not “impose an undue burden on Americans’ use of AI for activity that would be lawful if conducted without the AI,” and do not penalize AI companies “for a third party’s unlawful conduct involving their models.” The idea that AI companies are not liable for illegal or harmful use of their products is particularly problematic because it sits at the center of several interconnected issues with AI right now, including its use to generate sexually explicit images of children and its alleged role in the suicides of users.

Ultimately, however, the framework may be too contradictory to be useful, Samir Jain, vice president of policy at the Center for Democracy & Technology, wrote in a statement to Engadget:

The White House’s high-level AI framework contains some concrete statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to deal with key tensions between different approaches to important topics like children’s online safety. It rightly says that the government should not force AI companies to ban or modify content based on a ‘partisan or ideological agenda,’ yet the administration’s ‘Woke AI’ executive order this summer does exactly that. On preemption, the Framework emphasizes that states should not be allowed to regulate AI development, but also correctly notes that federal law should not weaken states’ traditional powers to enforce their own laws against AI developers. States are currently leading the fight to protect Americans from harm caused by AI systems, and Congress has twice correctly decided not to pursue blanket exemptions.

President Donald Trump has attempted to take an active role in developing and regulating AI in the US, with mixed results, largely because, as Jain notes, Congress has been unwilling to strip states of their right to regulate the technology on their own terms. Without federal preemption, it’s hard to say how much of the framework will actually make it into law.


