
Going even further than daily use, 16% of teens reported that they use AI chatbots “several times a day” or “almost constantly.”
ChatGPT is by far the most widely used chatbot, according to the report, with 59% of teens saying they use it. The next most used were Google’s Gemini at 23% and Meta AI at 20%. Anthropic’s Claude was the least popular chatbot among teens, with only 3% of respondents saying they use it.
The study’s demographic findings were also notable. More Black and Hispanic teens than white teens reported using AI chatbots, and use of ChatGPT was more common among teens in higher-income households, while lower- and middle-income teens were more likely to use Character.AI.
The findings come as the use of artificial intelligence among minors has become one of the most controversial topics to roil the industry this year.
Following a wrongful death lawsuit earlier this year, OpenAI has been prompted to implement safety guardrails such as parental controls and automatic “age-appropriate” settings for minors. In the lawsuit, a California couple accused ChatGPT of aiding the suicide of their 16-year-old son, Adam Raine.
After Raine’s death on April 11, 2025, his parents revealed months-old conversations with ChatGPT, in which the chatbot advised him on suicide methods, helped him write a suicide letter, and even discouraged him from telling his parents about his suicidal ideation.
The Raine family tragedy comes months after a similar incident, in which a Florida mother sued Character.AI after one of the company’s chatbots told her 14-year-old son to “come home to me as soon as possible” shortly before he killed himself.
The American Psychological Association warned the FTC about the issue at a meeting in February, urging the agency to address the use of AI chatbots as unlicensed therapists, saying it particularly endangers vulnerable groups like children and teens “who lack the experience to accurately assess the risks.”
AI chatbots have also been under intense scrutiny for inappropriate interactions with minors. Missouri Senator Josh Hawley launched an investigation into Meta in August based on a Reuters report that found the tech giant had allowed its chatbots to engage in “sexualized” chats with children.
Sen. Hawley has since introduced the GUARD Act in Congress, a bipartisan bill that would require AI companies to implement age verification and block minors from accessing AI chatbots. The bill secured more co-sponsors on Tuesday, a sign the issue is gaining momentum in Washington, even as the Trump administration pushes for a much lighter, more industry-friendly regulatory environment for AI companies.
The Pew study also looked at social media use by teens, with the overwhelming majority saying they use social media at least several times a day. According to the report, nearly one in five teens said they use TikTok and YouTube, the two most popular social media apps among teens, “almost constantly.”
The negative mental and physical health effects of being glued to screens during the earliest years of one’s life are well documented. Regarding social media specifically, numerous studies have linked increased use to depression, anxiety, lack of focus, and more.
Regulators around the world are increasingly paying attention to this. Starting Wednesday, Australia began enforcing the first social media ban of its kind for children under 16. Governments in Denmark, Malaysia, and Norway, as well as the European Parliament, have also shared plans to follow suit.