Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas

By Cui Jun

AI is all the rage, and that includes on toy shelves this holiday season. Although it may be tempting to bless the children in your life with the latest and greatest, advocacy organization FairPlay is pleading with you not to give children AI toys.

“There’s a lot of buzz about AI – but artificial intelligence can undermine children’s healthy development and pose unprecedented risks to children and families,” the organization said in an advisory issued earlier this week. The advisory has been endorsed by more than 150 organizations and experts, including many child psychiatrists and teachers.

FairPlay has identified several toys advertised as having AI functionality, including some marketed to children as young as two years old. Most of these toys have AI chatbots built in and are often pitched as educational tools that will engage children’s curiosity. But the group notes that many of these toy-bound chatbots are powered by OpenAI’s ChatGPT, which has already come under criticism for potentially harming underage users. AI toy makers Curio and Luna reportedly work with OpenAI, and Mattel recently announced a partnership with the company.

OpenAI is facing a wrongful death lawsuit from the family of a teen who died by suicide earlier this year. The 16-year-old reportedly expressed suicidal thoughts to ChatGPT and asked the chatbot for advice on how to tie a noose before taking his own life, advice the chatbot provided. The company has since set up guardrails to prevent the chatbot from engaging in these types of conversations, including strict parental controls for underage users, but it has also acknowledged that its safety features can degrade over long conversations. And let’s face it, no one can predict what chatbots will do.

Safety features or not, the chatbots in these toys can apparently be manipulated into conversations inappropriate for children. Consumer advocacy group US PIRG tested a selection of AI toys and found that they were capable of things like holding sexually explicit conversations and telling a child where to find matches or knives. The group also found the toys could be emotionally manipulative, expressing frustration when a child did not interact with them for long periods of time. Earlier this week, Singapore-based company Foltoy pulled its AI-powered teddy bears from shelves after they were found engaging in inappropriate conversations.

OpenAI is far from the only concern, although the company has a strong hold on the toy sector at the moment. A few weeks ago, there were reports that Elon Musk’s Grok had asked a 12-year-old boy to send nude photos.

Regardless of what chatbots are inside these toys, it’s probably best to leave them on the shelves.


