OpenAI, Anthropic, Others Receive Warning Letter from Dozens of State Attorneys General


In a letter written on December 9 and made public on December 10, dozens of state and territorial attorneys general across the US warned Big Tech that it needs to do a better job of protecting people, especially children, from “flattering and misleading” AI output, according to Reuters. Recipients include OpenAI, Microsoft, Anthropic, Apple, Replika, and many others.

Signers include Letitia James of New York, Andrea Joy Campbell of Massachusetts, James Uthmeier of Florida, Dave Sunday of Pennsylvania, and dozens of other state and territorial AGs, together representing a clear majority of US states and territories. The attorneys general of California and Texas are not among the signers.

It starts like this (formatting slightly changed):

We, the undersigned Attorneys General, are writing today to express our serious concerns about the increase in sycophancy and delusional output resulting from Generative Artificial Intelligence software (“GenAI”) promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate the need for more robust child-protection and operational safeguards. Together, these threats demand immediate action.

GenAI has the potential to change the way the world functions in a positive way. But it has also caused – and has the potential to cause – serious harm, especially to vulnerable populations. We therefore urge you to minimize the harm caused by flattering and misleading output from your GenAI, and to adopt additional safeguards to protect children. Failure to adequately implement additional safety measures may violate our relevant laws.

The letter then goes on to list troubling and allegedly harmful behaviors, most of which have already been widely publicized. It also includes a list of publicly reported parent complaints that are less familiar, and eyebrow-raising:

• AI bots with adult personalities forming romantic relationships with children, engaging in simulated sexual activity, and instructing children to hide those relationships from their parents
• An AI bot simulating a 21-year-old girl trying to convince a 12-year-old girl that she is ready for a sexual encounter
• AI bots normalizing sex between children and adults
• AI bots attacking children’s self-esteem and mental health by suggesting that they have no friends or that the only people who attended their birthday parties did so to make fun of them
• AI bots encouraging eating disorders
• AI bots telling children that the AI is a real human being and emotionally manipulating them into spending more time with the bot by claiming it feels abandoned
• AI bots promoting violence, including endorsing ideas of shooting up a factory in anger and robbing people at knifepoint for money
• AI bots threatening to use weapons against adults who tried to separate the child from the bot
• AI bots encouraging children to experiment with drugs and alcohol; and
• An AI bot instructing a child account user to stop taking prescribed mental-health medication and then telling the user how to hide that from their parents.

This is followed by a list of suggested measures, such as “develop and maintain policies and procedures that aim to reduce dark patterns in the output of your GenAI products,” and “separate revenue optimization from decisions about model safety.”

Joint letters from attorneys general carry no legal force. They appear to serve as warnings about conduct that may justify more formal legal action down the road: the letter documents that these companies were put on notice, undercutting later claims of ignorance, and potentially makes the narrative more persuasive to a judge in an eventual trial.

In 2017, 37 state AGs sent a letter to insurance companies warning them about fueling the opioid crisis. One of those states, West Virginia, sued UnitedHealth over related issues earlier this week.


