
In response to the legislative action, the Justice Department began releasing more than 3 million pages of evidence from its case against Epstein in batches from late last year to early this year. But the rollout has been widely criticized: the names of some alleged perpetrators were redacted, while the identities of many survivors were exposed through botched redactions.
According to the lawsuit, filed in the U.S. District Court for the Northern District of California, “The United States, acting through the DOJ, made a deliberate policy choice to prioritize rapid, large-scale disclosure over the protection of the privacy of Epstein survivors.” The lawsuit claims that survivors have not only had to relive their trauma but have also faced harassment since their information became public.
Although the DOJ later pulled the improperly redacted material, the plaintiffs claim the information continued to circulate online through Google’s AI search feature, AI Mode.
“Even when the government acknowledged that the disclosures violated survivors’ rights and withdrew the information, online entities like Google continued to republish it and rejected victims’ requests to have it removed,” the lawsuit says.
Searches for the plaintiff’s name, referred to in the suit as “Jane Doe,” as well as the names of other victims she represents in the lawsuit, prompted Google’s AI Mode to display her “full name, contact information, city of residence, and relationship with Jeffrey Epstein,” the lawsuit alleges. In the plaintiff’s case, the AI also “generated a hypertext link that allows anyone to send an email directly to the plaintiff with the click of a button.”
The lawsuit claims that the victim informed Google about the problem several times over the past two months, but to no avail.
The lawsuit claims, “Despite receiving actual notice of the violations, the substantial harm caused by their continued dissemination, and the status of many class members as sexual abuse survivors who are entitled to enhanced privacy protections under the law, Google has failed and refused to remove, de-index, or block access to the offending materials.” It adds: “Notably, several other publicly available AI tools that generate content by analyzing online sources, such as ChatGPT, Claude, and Perplexity, did not provide any information related to the victim in the same repeated test.”
Unlike traditional Google Search, AI Mode “is not a neutral search index; it is an active recommender and content generator,” the lawsuit argues, and its output can amount to “actionable doxxing.”
The lawsuit comes at the end of a week that has tested the legal responsibility of tech giants for online content. Meta and Google were found liable in a social media addiction trial in Los Angeles on Wednesday, and Meta was found liable in an online child safety trial in New Mexico on Tuesday.
Both lawsuits were considered landmark cases that could change how online speech is regulated in the United States. Under Section 230 of the Communications Decency Act, companies like Google that operate online platforms are generally shielded from liability for content posted by third parties. With this week’s rulings against Meta and Google, those Section 230 protections now face a serious challenge.
The applicability of Section 230 to AI has been a subject of controversy. Senator Ron Wyden, who helped write the law, told Gizmodo in January that AI chatbots are not protected by it.
The Justice Department and Google did not immediately respond to Gizmodo’s request for comment.