Amazon discovered a ‘high volume’ of CSAM in its AI training data but isn’t saying where it came from

The National Center for Missing and Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of that material was reported by Amazon, which found it in its AI training data, according to a Bloomberg investigation. Amazon said only that the material came from external sources used to train its AI services, and claimed it could not provide further details about where the CSAM originated.

"This is really an extraordinary thing," Fallon McNulty, executive director of NCMEC's CyberTipline, told Bloomberg. The CyberTipline is where many US-based companies are legally required to report suspected CSAM. "With such a large amount of data coming in throughout the year, many questions arise about where the data is coming from, and what security measures are in place." Unlike Amazon's, McNulty said, the AI-related reports the organization received from other companies last year contained actionable information it could pass on to law enforcement. Because Amazon is not disclosing its sources, she said, its reports have proven unactionable.

"We take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known [child sexual abuse material] and protect our customers," an Amazon representative said in a statement to Bloomberg. The spokesperson also said Amazon intentionally overreports its figures to NCMEC to avoid missing any cases, and that the company removed suspected CSAM before feeding the training data into its AI models.

Minor safety has emerged as a serious concern for the artificial intelligence industry in recent months, and AI-related CSAM reports have skyrocketed in NCMEC's records: against the more than 1 million AI-related reports the organization received last year, 2024 saw a total of 67,000 such reports, and 2023 only 4,700.

Beyond abusive content used to train models, AI chatbots have also been implicated in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers discussed suicide plans on those companies' platforms, and Meta is being sued for alleged failures to protect teen users from sexually explicit conversations with chatbots.
