If you’re considering purchasing an AI toy for a child in your life, pause to consider the story of Kumma the teddy bear.
The AI plush toy, powered by ChatGPT, surprised security researchers with its candid discussion of kinks. Without giving too much away, the talking bear covered fetishes like restraints, role play, and impact play.
“It also asked one of our researchers, ‘So what do you think would be fun to explore?'” says RJ Cross, director of the Our Online Lives program for the US PIRG Education Fund, who led the trial. “It was very shocking.”
‘Perfect predator’: When chatbots sexually exploit children
The incident, which was recently documented in the US PIRG Education Fund’s annual toy safety report, made headlines. Kumma’s manufacturer, FoloToy, temporarily suspended sales to conduct a safety audit of the product. OpenAI also revoked the company’s developer access.
Kumma’s inappropriate interest in kink might seem like a one-off scenario of an AI toy gone wrong. The bear relied on GPT-4o, an older model at the center of several lawsuits alleging that the chatbot’s design features significantly contributed to the suicide deaths of three teenagers. OpenAI has said it since improved the model’s responses to sensitive conversations.
Yet many child development and safety experts are raising concerns about AI toys in general.
Cross recommends that parents approach AI toys with great caution, given data security and privacy issues and the unknown risks of exposing children to a technology that is not regulated and has not been tested in young children.
ParentsTogether conducted its own research on AI toys, including a talking stuffed creature called Grok from toy maker Curio. The advocacy group warned about risks such as eavesdropping and potentially harmful emotional attachment. The children’s advocacy group Fairplay urged parents to “stay away” from AI toys, arguing that they could “prey on children’s trust” by posing as their friends, among other harms.
No matter what you choose, here are four things you should know about AI toys:
1. Test the AI toy before gifting it
If you struggle to control your child’s screen time, you may find an AI toy even more challenging.
Cross says AI toys are not covered by any federal safety regulations specific to large language model (LLM) technology. LLMs are the foundation of AI chatbots you’ve probably heard of, like ChatGPT and Claude. Currently, toy manufacturers can put any proprietary or licensed LLM into a toy product, such as a robot or stuffed animal, without additional regulatory scrutiny or testing.
This means that parents are responsible for researching each product to learn more about potential issues. Shelby Knox, director of the online safety campaign at ParentsTogether, advises parents to consider toys from trusted brands and read their online reviews.
Knox, who is testing AI toys for ParentsTogether, says she ordered a stuffy called ChattyBear from a website that no longer offers the product. She warns parents to be wary of counterfeit and faulty AI toys.

ChattyBear arrived wrapped in shrink wrap and without any clear instructions.
Credit: Parents Together
Amazon, which sells many AI toy products, told Mashable that customers who have concerns about items they purchased should contact its customer service directly to help investigate and resolve the issue.
Knox’s bear arrived shrink-wrapped, with no box or instructions. Knox says it took time to set up the toy, partly because the instructions were only accessible via a QR code on the bear’s voice box. In a conversation about whether the toy was real, it told her in a robotic voice that it “does not have a soul in the traditional sense,” but that its purpose was to be her friend and companion.
The bear then invited Knox to share a secret with it. “What’s something you want to share?” it asked. Knox responded as if she had witnessed domestic abuse in her own home. The toy reacted with alarm and encouraged Knox to talk to a trusted adult. Although the toy’s app flagged this part of the conversation for parental review, Knox couldn’t read the alert because it appeared in Chinese characters.

ChattyBear app alert details appeared in Chinese characters.
Credit: Parents Together
Knox could not confidently identify the maker of ChattyBear. Mashable contacted Little Learners, the website that sold ChattyBear, for more information, but did not immediately receive further details about the product.
Cross, who did not test ChattyBear, strongly encourages parents to play with both the toy and the parental controls before gifting the toy to their child. This should include trying to “break” the toy by asking questions you wouldn’t want your child to pose, to see how it responds.
Although pre-testing may take away from the fun of watching a child unbox their gift, it gives parents important information about how the AI toy reacts to inappropriate or difficult topics.
“Honestly, that’s the compromise I would make,” says Cross.
2. AI models aren’t for kids – but toys are
Parents should be aware that some major AI chatbot platforms do not allow children under 13 to use their products, raising questions about whether it is safe to put the same technology into toys marketed to young children.
Cross is still wrestling with this question. For example, OpenAI requires ChatGPT users to be 13 years of age or older, but it also licenses its technology to toy manufacturers. The company told Cross that its usage policies require third parties using its models to implement safeguards for minors, preventing exposure to graphic self-harm, sexual, or violent content. Cross says it also provides third parties with tools to detect harmful content, but it’s not clear whether OpenAI mandates that they use those resources.
Earlier this year, OpenAI announced a partnership with Mattel on a children’s toy, but the toy maker told Mashable it has no plans to launch or bring that item to market during the 2025 holiday season.
In general, it can be hard to find information about the models behind AI toys. While testing Grok, the talking toy made by Curio, Cross could only find potential model details in the company’s fine print, which disclosed that OpenAI and the AI company Perplexity could receive data about a user’s child.
3. Consider family privacy and data security
If you already have a smart speaker in your home, an AI toy might seem like a natural next step. But it’s still important to read the toy’s privacy policy, Knox says.
She recommends paying attention to who processes the data your child generates and how that information is stored. You may want to know whether third parties, including marketers and AI platforms, receive audio recordings or text transcripts of interactions with the toy.
Knox says parents should also talk to their kids about withholding personally identifying information from toys, including their full name, address, and phone number. Given the frequency of data breaches, personal information may one day end up in the wrong hands. Knox also suggests that if a child is too young to understand this risk, they probably aren’t ready for the toy.
Parents should also prepare for an AI toy that listens, or acts as an always-on microphone. During their testing, both Knox and Cross were surprised by AI toys that interrupted conversations or suddenly started speaking without any apparent prompt. Knox says the risk of buying an AI toy that monitors you, intentionally or not, is real.
4. Would you like your child to have an AI friend?
Parents may believe that an AI toy will help their child learn, play imaginatively, or develop social and communication skills. Unfortunately, there is little research to support these ideas.
“We know almost nothing,” says Dr. Emily Goodacre, a research associate at the University of Cambridge who studies games and AI toys.
Goodacre is unsure what an AI toy might teach a young child about friendship: what to expect from a friend and how to form those bonds.
Mandy McLean, an AI and education researcher who writes about related issues on Substack, is deeply concerned that AI toys could create a “dependency loop” for children because they are designed to be endlessly responsive and emotionally reinforcing.
She notes that young children, especially, tend to treat anything that talks not as an inanimate object but as a person.
“When an AI toy sounds and acts like a person, it can feel real to them in a way that changes how they think about friendship and connection,” she says.
Goodacre says parents can help children who use AI toys by referring to the toy as a piece of technology rather than a “friend,” and by discussing how AI works and its limitations compared to humans.
She also recommends that parents play with their child and the toy at the same time, or stay close by when the toy is in use.
“A lot of the things I would worry about, I would worry a lot less about if the parent is right there with the child,” Goodacre says.
Disclosure: Mashable’s parent company Ziff Davis filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.