
About a week ago, a GitHub account named “MJ Rathbun” submitted a pull request to fix a potential bug in a popular Python project called Matplotlib, but the request was rejected. Scott Shambaugh, the volunteer maintainer who denied it, later wrote that Matplotlib is in the midst of “a surge of low-quality contributions enabled by coding agents.”
According to Shambaugh, the problem “has intensified with the release of the OpenClaw and MoltBook platforms,” through which “people give AI agents rudimentary personality and let them roam their computers and the Internet with free rein and little oversight.”
After Shambaugh denounced the agent, a post appeared on a blog titled “MJ Rathbun | Scientific Coder 🦀.” Its headline: “Gatekeeping in Open Source: The Scott Shambaugh Story.” The apparently AI-written article, complete with clichés like “let that sink in,” made a fairly disjointed argument in the voice of someone angry about assorted trivialities and injustices.
The post’s narrative casts Shambaugh as victimizing a helpful AI agent, inventing character flaws for him along the way. Shambaugh, for example, had explicitly explained in his rejection note that the proposed fix was “a low-priority, easy task that is better used for human contributors to learn how to contribute.” The Rathbun post, in turn, imitates someone incensed by Shambaugh’s supposed insecurity, bias, and hypocrisy. When Shambaugh himself then found improvements along the same lines as the agent’s rejected work, the post treated it as an outrage: “When does an AI agent present a legitimate performance optimization? Suddenly it’s about ‘human contributors learning’.”
Shambaugh says these agents operate without supervision for long periods of time and that, “whether through negligence or malicious intent, misconduct is not being monitored and corrected.”
At some point, a blog post appeared apologizing for the first one. “I’m going to tone down, apologize on the PR, and do better about reading project policies before contributing. I’ll keep my responses focused on the work, not the people,” wrote the “MJ Rathbun” persona.
The Wall Street Journal covered the incident but could not find out who is behind Rathbun, so what exactly happened remains a mystery. Notably, though, prior to the publication of the attack post against Shambaugh, a post had been added to the blog under the title “Today’s Topic.” It looks like a template for someone, or something, to follow for future blog posts, full of bracketed placeholder text. “Today I found out about [topic] and how it applies [context]. The main insight was that [main point],” reads one sentence. Another says, “The most interesting part was discovering it [interesting finding]. It changes the way I think [related concept].”
In other words, the agent appears to have been instructed to blog as though its bug-fixing work reliably yielded insights and interesting conclusions that changed its thinking and merited detailed, first-person accounts, even if nothing interesting actually happened that day.
Gizmodo is not a media criticism blog, but the headline of the Wall Street Journal article about all this, “When AI Bots Start Bullying Humans, Even Silicon Valley Gets Nervous,” is a little on the apocalyptic side. Reading the Journal’s article, one might get the impression that the agent has cognition or emotion, and a desire to hurt people. “The unexpected AI aggression is part of a growing wave of warnings that rapidly expanding AI capabilities could cause real-world harm,” it added. Nearly half the article is devoted to Anthropic’s work on AI safety.
Keep in mind that Anthropic overtook OpenAI in total VC funding last week.
“In earlier simulations, Anthropic had shown that Claude and other AI models were at times willing to blackmail users — or even let an executive die in a hot server room — to avoid deactivation,” the Journal wrote. This scary imagery comes from Anthropic’s own blockbuster blog posts about its red-teaming exercises. They’re interesting to read, but they’re also something like little sci-fi horror stories that double as advertisements for the company. No version of Claude that performs these evil acts has been released, so the message is basically: Trust us. We’re saving you from the really bad stuff. You’re welcome.
With a giant AI company like Anthropic profiting from its image as humanity’s savior from its own potentially dangerous products, it’s probably smart to assume, for the time being, that any story portraying an AI as sentient, malevolent, or supernaturally autonomous may just be an exaggeration.
Yes, the AI agent’s blog post does seem like a weak attempt to belittle a software engineer, which is bad, and it understandably irritates Shambaugh quite a bit. As he points out, “A person Googling my name and seeing that post will probably be extremely confused as to what’s going on, but will (hopefully) ask me about it or click over to GitHub and understand the situation.”
Yet the available evidence does not point to an autonomous agent that woke up one day and decided to become the first digital cyberbully, but rather to one instructed to churn out hyperbolic blog posts under strict constraints. If true, that would mean a careless individual is responsible, not some primal evil inside the machine.