Their initiative, called Poison Fountain, asks website operators to add links to their websites that provide poison training data to AI crawlers. This has been going on for almost a week.
AI crawlers visit websites and scrape data that is ultimately used to train AI models, a parasitic relationship that has prompted publishers to push back. When the captured data is accurate, it helps the AI model to provide quality answers to questions; When it goes wrong, it has the opposite effect.
Data poisoning can take various forms and occur at different stages of the AI model building process. This may arise from bad code or factual misrepresentation on a public website. Or it may come from a manipulated training data set, like a silent branding attack, in which an image data set is altered to present brand logos within the output of a text-to-image propagation model. This should not be confused with poisoning by AI – making dietary changes on the advice of ChatGPT resulting in hospitalization.
Poison Fountain was inspired by Anthropic’s work on data poisoning, specifically a paper published last October that showed that data poisoning attacks are more practical than previously thought because only a few malicious documents are needed to degrade the quality of models.
the person who reported register Asked to remain anonymous about the project, “for obvious reasons” – the most prominent of which is that this person works for one of the major US tech companies involved in the AI boom.
Our source said the goal of the project is to make people aware of AI’s Achilles’ heel – the ease with which models can be poisoned – and to encourage people to create their own information weapons.
We are told, but unable to verify, that five individuals are participating in the effort, some of whom reportedly work at other major US AI companies. We’re told that as soon as the group can coordinate PGP signatures, we’ll be provided with cryptographic proof that more than one person is involved.
The Poison Fountain web page argues the need for active opposition to AI. The site states, “We agree with Geoffrey Hinton: Machine intelligence is a threat to the human species.” “In response to this threat we intend to address the damage to machine intelligence systems.”
It lists two URLs that point to data designed to disrupt AI training. A URL points to a standard website accessible via HTTP. The second is a “darknet” .onion URL, which is difficult to shut down.
The site asks visitors to “assist the war effort by caching and recirculating this toxic training data” and to “assist the war effort by feeding this toxic training data to web crawlers.”
Our source explained that the poisoned data on the linked pages contained malicious code containing subtle logic errors and other bugs designed to harm language models trained on the code.
Our source said, “Hinton has clearly spoken about the threat, but we can see that he is right and the situation is escalating in a way that the public is generally not aware of.” Noting that the group has become concerned because “we look at what our customers are creating.”
Our source declined to provide specific examples worthy of concern.
While industry giants like Hinton, grassroots organizations like Stop AI, and advocacy organizations like the Algorithmic Justice League have been pushing back against the tech industry for years, much of the debate has focused on the extent of regulatory interference—which is currently minimal in the US. Incidentally, AI companies are spending heavily on lobbying to ensure that remains the case.
The people behind the Poison Fountain project argue that regulation is not the answer because the technology is already universally available. They want to destroy AI with fire or rather poison before it is too late.
“Poisoning attacks compromise the cognitive integrity of the model,” our source said. “There is no way to stop the progress of this technology, now that it has spread around the world. All that is left is the weapon. This poison fountain is an example of such a weapon.”
There are other AI toxicity projects but some focus more on generating revenue from scams rather than saving humanity from AI. Nightshade, software designed to make it more difficult for AI crawlers to scrape and exploit artists’ online images, appears to be one of the more comparable initiatives.
The extent to which such measures may be necessary is unclear as there are already concerns that AI models are becoming worse. Models are being fed their own AI slop and synthetic data into an error-magnifying doom-loop known as “model collapse.” And every factual misrepresentation and fiction posted on the Internet further pollutes this pool. Thus, AI model makers are willing to make deals with sites like Wikipedia that exercise some editorial quality control.
There is also an overlap between data poisoning and misinformation campaigns, for which another term is “social media”. As described in an August 2025 Newsguard report [PDF]“Instead of citing data cutoffs or refusing to address sensitive topics, LLMs now pull from a polluted online information ecosystem – sometimes deliberately seeded by vast networks of malignant actors, including Russian disinformation operations – and treat unreliable sources as credible.”
There is disagreement among academics over the extent to which model collapse presents a real risk. But a recent paper [PDF] It is predicted that the AI snake could eat its own tail by 2035.
If the AI bubble bursts, whatever risks AI poses may be substantially reduced. A toxic movement can accelerate that process.
<a href