An open source AI assistant called Moltbot (formerly “Clawdbot”) recently surpassed 69,000 stars on GitHub after just one month, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through the messaging apps they already use. While some say it sounds like the AI assistant of the future, running the tool as currently designed comes with serious security risks.
Among the dozens of casual AI bot apps that never rise above the fray, Moltbot stands out for how actively it communicates with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has been compared to Jarvis, the AI assistant from the Iron Man films, for its attempts to actively manage tasks in the user’s digital life.
However, we’ll tell you upfront that the still-hobbyist software has significant drawbacks: while the assistant’s code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI (or an API key) for model access. Users can run local AI models with the bot, but those are currently less effective at completing tasks than the best commercial models. Claude Opus 4.5, Anthropic’s flagship large language model (LLM), is a popular choice.

Screenshot of an interaction with Clawdbot/Moltbot taken from its GitHub page.
Credit: Moltbot
Installing Moltbot requires configuring a server, managing authentication, and understanding sandboxing to secure a system that demands access to practically every aspect of your digital life. Heavy usage can also incur significant API costs, since agent systems make many model calls behind the scenes and consume lots of tokens.
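To get a feel for how those costs add up, here is a back-of-the-envelope estimate. All of the numbers below are illustrative assumptions, not Moltbot’s actual usage patterns or any provider’s actual pricing; check current API rates before relying on figures like these.

```python
# Rough estimate of monthly API spend for an agent that makes many model
# calls. Every number here is a hypothetical assumption for illustration,
# not Moltbot's measured usage or a real provider's price sheet.

def monthly_cost(calls_per_day, input_tokens_per_call, output_tokens_per_call,
                 usd_per_m_input, usd_per_m_output, days=30):
    """Return estimated monthly spend in USD, given per-call token counts
    and prices quoted per million tokens."""
    total_calls = calls_per_day * days
    input_cost = total_calls * input_tokens_per_call / 1e6 * usd_per_m_input
    output_cost = total_calls * output_tokens_per_call / 1e6 * usd_per_m_output
    return input_cost + output_cost

# Hypothetical heavy use: 200 agent calls/day, 4,000 input and 500 output
# tokens per call, at assumed rates of $15/M input and $75/M output tokens.
cost = monthly_cost(200, 4000, 500, 15.0, 75.0)
print(f"~${cost:,.2f}/month")  # → ~$585.00/month
```

Even with modest per-call token counts, the sheer number of background calls an always-on agent makes is what drives the bill up.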