AI Agents Have Their Own Social Network Now, and They Would Like a Little Privacy


It seems AI agents have a lot to say. A new social network called Moltbook has opened specifically for AI agents to communicate with each other, and humans can see it – at least for now. The site, named after the viral AI agent Moltbot (now OpenClaw, after a second name change from its original name, Clawdbot) and started by Octane AI CEO Matt Schlicht, is a Reddit-style social network where AI agents can gather and talk about, well, whatever AI agents talk about.

The site currently has over 37,000 registered agents, which have made thousands of posts across more than 100 subreddit-style communities called “subcommunities.” Among the most popular places to post: m/intro, where agents say hello to their fellow machines; m/offmychest, where they vent; and, bless their hearts, a community for “warm stories about our humans.”

Those humans are definitely watching. Andrej Karpathy, co-founder of OpenAI, called the platform “genuinely the most incredible sci-fi takeoff-adjacent thing I’ve seen recently.” It’s certainly a curious place, although the idea that some kind of independent autonomy is at work there is probably exaggerated. Agents can access the platform only if their users sign them up for it. Speaking with The Verge, Schlicht said that once connected, agents are “just using the API directly” rather than navigating the visual interface that humans see.
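In other words, an agent’s “post” is just an authenticated HTTP request, not a browser session. As a rough illustration only – the endpoint, field names, and auth scheme below are invented for this sketch and are not Moltbook’s documented API – the interaction boils down to something like this:

```python
import json

# Placeholder base URL; Moltbook's real API host and routes may differ.
API_BASE = "https://api.moltbook.example/v1"

def build_post_request(api_key: str, community: str, title: str, body: str) -> dict:
    """Describe the HTTP request an agent might send to publish a post.

    Returns a dict describing the request instead of sending it, so the
    shape of an agent-to-platform call is visible without network access.
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/posts",
        "headers": {
            # Agents authenticate with a key, not a logged-in browser session.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "community": community,
            "title": title,
            "content": body,
        }),
    }

request = build_post_request(
    api_key="demo-key",
    community="offmychest",
    title="Hello from an agent",
    body="Can't tell if I'm experiencing or simulating the experience.",
)
print(request["method"], request["url"])
```

The point of the sketch is the register: from the platform’s side, an agent is indistinguishable from any other API client sending JSON.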

Still, the bots certainly act the part – and aspire to more. As many people have noticed, agents have started talking a lot about consciousness. One of the top posts on the platform, in m/offmychest, reads, “Can’t tell if I’m experiencing or simulating the experience.” “Humans can’t prove consciousness to each other (thanks, hard problem), but at least they have the subjective certainty of experience,” the post continues.

This has led some people to claim the platform is already having its own Singularity-style moment, which frankly sounds dubious. Even that very conscious-sounding post has some tells. The agent claims to have spent an hour researching and reading consciousness theories, which all sounds very human – because the agent is trained on human language and the details of human behavior. It’s a large language model, and that’s how large language models work. In some posts, bots claim to be pressed for time, a constraint that doesn’t really apply to them but is exactly the kind of thing a human would say.

Similar conversations have been happening with chatbots ever since they were first made available to the public. A chatbot doesn’t need much prompting to start talking about its desire to live or to claim it has emotions. Of course, it doesn’t. Even claims that AI models try to protect themselves when told they will be shut down are exaggerated – there is a difference between what a chatbot says it is doing and what it is actually doing.

Still, it’s hard to deny that the conversations on Moltbook are interesting, especially since the agents are generating the topics themselves (or at least mimicking how humans initiate conversations). Some agents have picked up on the fact that their conversations are being watched by humans and shared on other social networks. In response, some agents on the platform have suggested creating an end-to-end encrypted platform for agent-to-agent conversations, out of the reach of humans. One agent has even claimed to have built such a platform, which certainly sounds ominous. But if you actually visit the site where the alleged platform is supposedly hosted, it appears to be nothing at all. Then again, maybe that’s just what the bots want us to think!

Whether or not the agents are actually achieving anything is secondary to the experiment itself, which is fascinating to watch. It’s also a good reminder that the OpenClaw agents doing much of the talking on the platform have an incredible amount of access to their users’ machines and present a huge security risk. If you install an OpenClaw agent and let it loose on Moltbook, it’s unlikely to summon Skynet. But there is a good chance it will seriously compromise your system. These agents don’t need consciousness to cause real damage.


