California Could Get a 4-Year Ban on Toys With AI Chatbots


According to a new report from TechCrunch, California State Senator Steve Padilla, a Democrat from San Diego, introduced a bill in the California State Senate on Monday that would impose a 4-year ban on the sale of toys with artificial intelligence chatbot capabilities to children under 18. The goal of the legislation, known as Senate Bill 867, is to provide enough time to develop safety rules protecting children from AI-powered toys that engage in inappropriate interactions or tell children how to harm themselves.

“Chatbots and other AI tools may become integral parts of our lives in the future, but the threats they pose now require us to take bold action to protect our children,” Senator Padilla said in a statement posted online.

“Our safety regulations regarding this type of technology are in their infancy and will need to rapidly evolve in line with the capabilities of this technology. Stopping the sale of these chatbot integrated toys gives us time to create appropriate safety guidelines and frameworks for these toys to follow. Our children cannot be used as lab rats for Big Tech experiments,” Padilla continued.

Several horror stories have emerged in recent months about AI-enabled toys interacting with children inappropriately. FoloToy, which makes a teddy bear named Kumma, drew attention last year when the bear discussed sexual topics with children until OpenAI cut off the company's access to GPT-4o. The teddy bear would also tell children where to find knives.

Mattel announced a partnership with OpenAI in June 2025, under which the company was to create an AI-assisted toy, but no such product has materialized yet. The consumer advocacy group U.S. PIRG Education Fund also tested several AI toys and found that many had limited parental controls and could tell children where to find dangerous items like guns and matches. One of the group's main findings was that the longer a person interacts with an AI toy, the more likely its guardrails are to fail.

AI chatbots have recently come under criticism in various contexts, especially after several people took their own lives after forming connections with them. Gizmodo last year filed a Freedom of Information Act request with the Federal Trade Commission for consumer complaints about OpenAI’s ChatGPT, which included examples of AI-induced psychosis. A complaint from a woman in Utah described how the chatbot instructed her son to stop taking his medication and insisted that his parents were dangerous. Putting that kind of capability into a teddy bear would obviously create even bigger problems.

President Donald Trump last month issued an executive order that seeks to prohibit states from passing their own laws to regulate AI. And while Trump’s power to do so by executive order is itself questionable, that question aside, the EO does provide exceptions for laws related to child protection.

It’s unclear whether Padilla’s new bill will pass. But even if it makes it through the California State Legislature, it could be vetoed by Governor Gavin Newsom, a Democrat who is an ally of Big Tech and loves to veto bills that could be great for humanity. In October, Newsom vetoed the No Robo Boss Act, which would have barred companies from automating firing and discipline decisions for workers.
