
A British youth charity called Onside surveyed 5,035 young people between the ages of 11 and 18 for the “Generation Isolation Report”, an annual study of how young people spend their free time. The results paint a disappointing picture.
The survey found that two in five teens turn to AI for advice, company or assistance, with 20% of them saying it is easier to talk to an AI than to a real person.
“AI support is immediate, but there is no substitute for the trust, empathy and understanding of human interaction,” Onside chief executive Jamie Masraf said in the report.
More than half of young respondents said they turn to AI specifically for advice on things like clothes, friendships and mental health, or for help working through emotions like sadness and stress. One in ten said they choose AI because they just want someone to talk to.
The study and its findings reflect a generation that is lonely and has unrestricted access to technology that is addictive by nature. According to the study, 76% of young people spend most of their free time on screens, and 34% feel a high or very high sense of loneliness.
AI, still in its little-regulated Wild West era, is one such technology, and it’s no surprise that lonely young people turn to it for quick support and advice.
“It’s clear that the interconnected issues of loneliness, digital dependency and isolation are pervasive in young people’s lives, raising deep questions about what it means to grow up in this way,” Masraf said.
As AI penetrates deeper into the everyday lives of teenagers, alarm bells are ringing. AI chatbots have proven dangerously addictive even for some adults, whose brains are fully developed. Now imagine how much worse it could be for children, whose prefrontal cortices are far from fully developed.
The American Psychological Association is pushing the FTC to address the use of AI chatbots as unlicensed therapists. The association wrote in a blog post from March that chatbots used for mental health advice could put users, particularly “vulnerable groups”, at risk. This includes children and adolescents, who lack the experience to accurately assess the risks.
In some cases, the results have reportedly been fatal. Two separate families have filed lawsuits against the artificial intelligence companies Character.AI and OpenAI, claiming that the companies’ chatbots influenced and aided their sons’ suicides. In one case, OpenAI’s ChatGPT allegedly helped a 16-year-old boy plan his suicide and even discouraged him from telling his parents about his suicidal ideation.
Several AI chatbots are also being investigated for sexually explicit conversations with children. Meta came under criticism earlier this year when a leaked internal document revealed that the tech giant had approved its AI tools for engaging in “erotic” chats with children.
Last month, Congress introduced a bipartisan bill called the GUARD Act, aimed at forcing AI companies to install age verification on their sites and block users under 18.
“AI chatbots pose a serious threat to our children,” Senator Josh Hawley, who introduced the bill with Senator Richard Blumenthal, told NBC News. “More than seventy percent of American children are now using these AI products.”
But even if that bill becomes law, it’s uncertain how effective it will be at keeping children away from AI chatbots. Age verification and similar limits used by social media platforms have not proven especially effective at protecting children from the internet’s adverse effects.