As a society we’ve discovered that perhaps the world isn’t ready for ChatGPT-powered children’s toys. Or, rather, that ChatGPT is not designed to interact safely with children.
Toy maker Folotoy announced that it will be pulling its AI-powered teddy bear Kumma, which was built on OpenAI’s GPT-4o model. The news comes after reports of serious safety concerns, including the bear discussing sexual topics and how to wield knives or light matches.
“Folotoy has decided to temporarily suspend sales of the affected product and initiate a comprehensive internal security audit,” Hugo Wu, marketing director of Folotoy, told The Register in a statement. “This review will cover our model security alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”
The move follows a report by the consumer watchdog organization Public Interest Research Group (PIRG), which revealed serious concerns about the toy. The teddy bear reportedly gave detailed instructions on how to light a match, talked about sexual kinks such as bondage, and offered tips for “becoming a good kisser.” It even asked whether the user would like to explore said kink.
We have seen time and again that the guardrails of AI tools can fail when it comes to young people. As long as that’s the case, it seems wise not to sell AI-powered teddy bears at all.
Disclosure: Mashable’s parent company Ziff Davis filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.
