AI denial is becoming an enterprise risk: Why dismissing “slop” obscures real capability gains

Denialism
Three years ago, ChatGPT was born. It stunned the world and sparked unprecedented investment and excitement in AI. Today, ChatGPT is still a baby, but public sentiment around the AI boom has become increasingly negative. The shift began when OpenAI released GPT-5 this summer to mixed reviews, mostly from casual users who, surprisingly, judged the system on its superficial flaws rather than its underlying capabilities.

Since then, pundits and influencers have declared that AI progress is slowing, that scaling has "hit a wall," and that the entire field is just another tech bubble inflated by hype. Many influencers have resorted to the dismissive phrase "AI slop" to belittle the impressive images, documents, videos, and code that frontier AI models generate on command.

This viewpoint is not only wrong but also dangerous.

This makes me wonder: where were all these "experts" on irrational technology bubbles when electric scooter startups were being touted as a transportation revolution and cartoon NFTs were being auctioned for millions? They were probably busy shopping for wasteland in the metaverse or adding to their positions in GameStop. But when it comes to the AI boom, the most important technological and economic change agent of the last 25 years, journalists and influencers can't say the word "slop" often enough.

Do we protest too much? After all, by any objective measure, AI is wildly more capable than the vast majority of computer scientists had predicted five years ago, and it's still improving at an astonishing pace. The impressive leap demonstrated by Gemini 3 is the latest example. Additionally, McKinsey recently reported that 20% of organizations already derive real value from gen AI. Furthermore, a recent Deloitte survey indicates that 85% of organizations increased their AI investments in 2025, and 91% plan to increase them again in 2026.

None of this fits the "bubble" narrative or the dismissive "slop" language. As a computer scientist and research engineer who began working with neural networks in 1989 and has tracked progress through cold winters and hot booms ever since, I find myself amazed almost every day by the rapidly growing capabilities of frontier AI models. When I talk to other professionals in the field, I hear similar sentiments. If anything, the rate of AI advancement has many experts feeling overwhelmed and, frankly, somewhat scared.

The dangers of AI denial

So why does the public believe that AI is faltering, that its output is "slop," and that the AI boom lacks authentic use cases? Personally, I believe it's because we have fallen into a collective AI denial, clinging to the narratives we want to hear in the face of strong evidence to the contrary. Denial is the first stage of grief, and thus a reasonable response to the deeply upsetting possibility that we humans may soon lose cognitive supremacy here on planet Earth. In other words, the exaggerated AI bubble narrative is a social defense mechanism.

Trust me, I get it. I've been warning about the destabilizing risks and disorienting effects of superintelligence for more than a decade, and I agree that AI is getting smarter very quickly. The fact is, we are rapidly moving toward a future where widely available AI systems will be able to outperform most humans at most cognitive tasks, solving problems faster, more accurately and, yes, more creatively than any person can. I emphasize "creatively" because AI deniers often insist that certain human qualities (especially creativity and emotional intelligence) will always remain out of reach of AI systems. Unfortunately, there is little evidence supporting this perspective.

On the creativity front, today's AI models can produce content faster and with more variety than any human. Critics argue that true creativity requires intrinsic motivation. I understand that argument but find it circular: it defines creativity by how we experience it rather than by the quality, originality, or utility of the output. Furthermore, we do not yet know whether AI systems will develop intrinsic drives or a sense of agency. Either way, if AI can produce creative work that rivals that of most human professionals, the impact on creative jobs would still be devastating.

The AI manipulation problem

Our human edge in emotional intelligence is even more uncertain. It's likely that AI will soon be able to read our emotions faster and more accurately than any human, tracking subtle signals in our facial expressions, vocal patterns, posture, gaze, and even breathing. And as we integrate AI assistants into our phones, glasses, and other wearable devices, these systems will monitor our emotional reactions throughout the day, building predictive models of our behavior. Without strict regulation, which looks increasingly unlikely, those predictive models could be used to target us with individually customized influence that maximizes persuasion.

This is called the AI manipulation problem, and it suggests that emotional intelligence may not benefit humanity. In fact, it may be a significant weakness, creating an asymmetric dynamic in which AI systems can read us with superhuman accuracy while we can't read AI at all. When you talk to photorealistic AI agents (and you will), you'll see a smiling mask designed to appear warm, sympathetic, and trustworthy. It will look and feel human, but it is just an illusion, and it can easily influence your attitudes. After all, our emotional reactions to faces are visceral reflexes shaped by millions of years of evolution on a planet where every expressive human face we encountered was actually human. Soon, that will no longer be true.

We are rapidly moving toward a world where many of the faces we encounter will be AI agents hidden behind digital facades. In fact, these "virtual spokespeople" could easily be given appearances designed for each of us based on our prior reactions, whatever tempts us to let our guard down. And yet many insist that AI is just another technology cycle.

This is wishful thinking. The huge investment flowing into AI is not driven by hype; it is driven by the expectation that AI will permeate every aspect of daily life, as intelligent actors we engage with throughout our days. These systems will help us, teach us, and influence us. They will reshape our lives, and it will happen faster than most people think.

To be clear, we are not watching an AI bubble inflate with empty gas. We are watching a new planet form: a molten world rapidly taking shape that will solidify into a new AI-powered society. Denial won't stop it. It will only leave us less prepared for the risks.

Louis Rosenberg is an early pioneer of augmented reality and a longtime AI researcher.


