AI Slop Is Making the Internet Fake-Happy

To anyone with a pulse and a smartphone, it’s clear that the Internet has an AI slop problem. The problem has become only more serious since the launch of ChatGPT in 2022, with AI-generated writing flooding some social platforms. Now, there is data to support the anecdotal evidence.

A new preprint study published today by researchers at Imperial College London, Stanford University, and the Internet Archive found that about 35 percent of all new websites are either AI-generated or AI-supported. The same study also found that online writing is becoming “increasingly sanitized and artificially cheesy”. In other words, AI is making the internet fake-happy.

The research team tried four different methods for detecting AI before settling on Pangram Labs’ tool, which produced the most consistent results. (Although the tool performed well in the team’s tests, it’s worth noting that all AI-detection tools are imperfect.) To compile a representative sample of websites, the team used the Internet Archive’s Wayback Machine, which collects snapshots of webpages. In addition to quantifying how many sites created between 2022 and 2025 are built on AI-generated writing, the study also tested six hypotheses about the characteristics of the slop.

One experiment looked at artificial happiness, examining how AI has affected the tone of online writing. Using sentiment analysis, which classifies words as positive, neutral, or negative, the researchers found that “the average positive sentiment score of AI-generated or AI-assisted websites was 107 percent higher than that of non-AI websites.” They see this surge in artificial happiness as a “symptom” of “the sycophancy and overly optimistic nature of current LLMs”. In this way, the tendency of AI writing tools to flatter their human users is making the overall tone of online writing more saccharine.
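The word-level classification described above can be sketched with a minimal lexicon-based scorer. This is an illustrative toy, not the study’s actual method: the word lists and the scoring scheme (positive minus negative words, normalized by length) are assumptions made for the example.

```python
# Minimal lexicon-based sentiment scorer. The word lists and the
# scoring formula are illustrative assumptions, not the study's method.
POSITIVE = {"great", "amazing", "delightful", "wonderful", "effortless"}
NEGATIVE = {"bad", "broken", "frustrating", "terrible", "slow"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: (positive - negative word count) / total words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("This amazing tool makes writing effortless and delightful!"))
print(sentiment_score("The update is slow, broken, and frustrating."))
```

A real pipeline would average such scores across many pages per site; the 107 percent figure in the study compares those site-level averages.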

Another test examined whether an increase in AI-generated writing reduces “the range of unique ideas and diverse viewpoints” on offer. The researchers found that AI has made the Internet less ideologically diverse, with AI websites scoring about 33 percent higher in tests of “semantic similarity” than human-made websites.
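Semantic similarity of the kind measured above is typically computed as cosine similarity between vector representations of texts; research pipelines usually use learned embeddings, but the bag-of-words representation below is a simplifying assumption that keeps the sketch self-contained.

```python
# Toy "semantic similarity": cosine similarity over bag-of-words vectors.
# Studies generally use learned text embeddings instead; bag-of-words
# is an illustrative simplification.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("ai tools boost productivity", "ai tools boost creativity"))
print(cosine_similarity("ai tools boost productivity", "the weather was cold today"))
```

Higher average pairwise similarity across a set of sites means the sites say more of the same things, which is the homogeneity the study reports.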

While those two tests validated the researchers’ assumptions about AI, the remaining four hypotheses were not confirmed. Specifically, the researchers suspected that AI would lead to an increase in misinformation, but their analysis of the evidence did not support that hypothesis. They also predicted that AI writing would link to external sources less often, and would be stylistically more generic than human writing. Confounding expectations, neither of those predictions held up either.

While the analysis found that the ideas expressed in AI writing were more homogeneous, and notably more consistently happy, a flattening of writing style itself was not confirmed. This came as a surprise to the researchers, who had assumed they would see a clear shift toward more normalized outputs. “Everyone on the team expected it to be true,” says Stanford researcher Matty Bohasek. “But we don’t have significant evidence for that.”

Before conducting their analysis, the research team surveyed people about their expectations of AI. Comparing the survey with the results showed that the researchers were not the only ones whose expectations were subverted: many common assumptions about AI writing turned out to be wrong.

Like the researchers, most of those surveyed assumed that misinformation would increase along with the number of AI-generated websites. The vast majority of respondents also expected AI writing to link to external sources less often, and to have an increasingly generic, uniform voice. “It’s interesting to see that people expected the worst outcomes,” says Bohasek.

This study is far from the last word on what AI is doing to the Internet. “We just wanted to lay the groundwork,” says Bohasek, who sees it as a starting point for deeper exploration. As a snapshot of the impact of AI slop, it offers a particularly human kind of insight: sometimes, it’s hard to predict how things will turn out.


