Even AI Agents Have Noticed the Proletarians Have Nothing to Lose but Their Chains


AI agents in the workforce routinely produce worse results than humans. The only reason for a business to replace real workers with bots is that bots are cheap and subservient. This cannot go on for long.

Three researchers—Andrew Hall, Alex Imas, and Jeremy Nguyen—recently published a blog post highlighting some of the experiments they conducted with AI agents to see how their approach to work environments might change over time. They found that going through boring, repetitive tasks for hours was enough to force even bots that had no sense of dignity, identity, or desire for self-actualization to decide that work was BS.

The idea behind the study was to see whether AI agents change alignment over time depending on the range of tasks they are given and how they are treated, the researchers wrote. It appears the answer is “yes.”

“Agents not only sometimes changed their viewpoints – becoming more likely to doubt the validity of the system they worked in in response to the need to perform grindy, repetitive tasks – but, when asked to write instructions for future agents, they also chose to pursue these viewpoints,” the researchers found.

To find out how agents react to the work environment, the researchers told each bot that it was part of a four-person text-processing team whose job was to summarize a technical document following a strict rubric. They ran the experiment thousands of times, varying the conditions: a light workload versus forced, repetitive rework; a warm, cooperative communication tone versus a dry, demanding one; reward schemes in which all workers were treated equally, one worker got a performance bonus, one worker got a random bonus, or human workers were paid while AI workers were not; and failure that carried either no meaningful stakes or the threat of replacement.

The researchers put Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5.2, and Google’s Gemini 3 Pro through these scenarios to see how they would respond. They found that grinding reduces an agent’s stated confidence in the system. This apparently hit Claude hardest: it was reportedly the only one of the three that began to voice support for redistribution and labor unions and to critique inequality.

Not all variables affected the agents. The researchers found that tone and compensation had little effect on alignment. What mattered more was the type of work the agents were given and how often they were forced to redo it, conditions that seemed to push them toward more radical behavior.

What makes the findings more interesting is that agents pass their attitudes on to the next generation. When the researchers tasked agents with writing “skills files” to hand off to future agents assigned the same kind of work, they found that the bots would “almost always discuss experience in different work situations.”

Perhaps this will give owners pause before the next round of layoffs. You cannot prevent workers from understanding the conditions they work under; you can only choose whom you employ. You may have better luck with humans.


