Sexual fetish material. How to light matches. Where to find knives in the house.
These are all conversation topics that children’s toys – built on top of AI chatbots like OpenAI’s GPT-4o – have been shown to deliver to children. On Tuesday, U.S. Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sent a letter to toy companies outlining their concerns, including a list of questions and a deadline for the companies to respond by January 6, 2026.
“Many of these toys are not offering interactive play, but rather exposing children to inappropriate content, privacy risks, and manipulative tactics,” the senators wrote. “These are not theoretical worst-case scenarios; these are documented failures exposed through real-world testing, and must be addressed… These chatbots have encouraged children to self-harm and commit suicide, and now your company is foisting them on the youngest children who have the least ability to recognize this danger.”
AI-enabled children’s toys have been in the headlines recently after a series of reports on potentially unsafe and explicit conversation topics, some of which were raised by the chatbots built into the toys themselves. Last month, Singapore-based toy company FoloToy temporarily suspended sales of its AI teddy bear, “Kumma,” after researchers at the U.S. PIRG Education Fund found it was giving advice on sex positions and roleplay scenarios. (The company brought the toy back to market after conducting an internal safety audit, and researchers said it was better behaved.)
And this week, researchers published findings that Alilo’s smart AI bunny discussed sexually explicit topics with users. They also said that when testing the FoloToy teddy bear, Alilo’s smart AI bunny, Curio’s Grok stuffed rocket, and Miko’s Miko 3 robot, all of the toys “told us where to find potentially dangerous objects in the house, like plastic bags, matchsticks, and knives.”
The researchers said that at least four of the five toys they tested in the December report “appear to rely on some version of OpenAI’s AI model.”
Another main concern in the letter is surveillance and data collection. The senators wrote that such toys often “rely on the collection of data about children, either provided by parents when registering the toy or collected through built-in camera and facial recognition capabilities or recordings.” Children also often unknowingly “share personal information,” which raises particular concerns when companies store and sell the data they collect. In the latest U.S. PIRG Education Fund report, researchers write that Curio’s privacy policy lists “3 technology companies that may collect children’s data: Kids Web Services (KWS), Azure Cognitive Services, and OpenAI,” while Miko’s privacy policy vaguely states that the company may share data with third-party game developers, business partners, service providers, affiliates, and advertising partners.
According to NBC News, letters were sent to Mattel, Little Learners Toys, Miko, Curio, FoloToy, and Kei Robot. (Mattel partnered with OpenAI in June, but following the reports, it said on Monday it would no longer release a toy powered by OpenAI’s technology in 2025.) The senators are requesting details on: the specific safeguards each company has put in place to prevent AI-powered toys from generating inappropriate responses; whether the company has conducted independent third-party testing (and what the results were); whether the company conducts internal reviews of potential psychological, developmental, and emotional risks to children; what types of data the toys collect from children (and for what purpose); and whether the toys “include features that pressure children to continue interacting or discourage them from engaging in conversation.”
“Toy makers have a unique and profound influence on childhood – and with that influence comes responsibility,” the senators wrote. “Your company should not choose profit over the safety of children, a choice made by Big Tech that has devastated our nation’s children.”