Grad isn’t the only tech executive to have raised concerns with employees about the experimental agentic AI tool, briefly known as Moltbot and now OpenClaw. A Meta executive says he recently told his team to keep OpenGL off their regular work laptops or risk losing their jobs. The executive told reporters that he believed the software was unpredictable and could lead to privacy breaches if used in an otherwise secure environment. He spoke openly on the condition of anonymity.
OpenClaw’s sole founder, Peter Steinberger, launched it last November as a free, open-source tool. But its popularity soared last month as other coders contributed features and started sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.
Installing OpenClaw requires basic software engineering knowledge. After that, it requires only limited direction to take control of the user’s computer and interact with other apps to assist with tasks such as organizing files, conducting web research, and making online purchases.
Some cybersecurity professionals have publicly urged companies to take measures to tightly control how OpenClaw is used by their workforces. And recent restrictions show that companies are increasingly moving to prioritize security ahead of their willingness to experiment with emerging AI technologies.
“Our policy is, ‘Mitigate first, investigate later’ when we find something that could be harmful to our company, users or customers,” says Grad, co-founder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. They say their warning to employees was given on January 26, before any of their employees had installed OpenClaw.
At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on Jan. 29 on an internal Slack channel to share new technology to potentially try. Valere CEO Guy Pistone told WIRED that the company’s president immediately responded that the use of OpenClaw was strictly prohibited.
“If it got access to one of our developer’s machines, it could gain access to our cloud services and our customers’ sensitive information, including credit card information and GitHub codebases,” says Pistone. “It’s very good at clearing up some of my work, which also scares me.”
A week later, Pistone allowed Valere’s research team to run OpenClaw on an employee’s old computer. The goal was to identify flaws in the software and make possible improvements to make it more secure. The research team later recommended limiting who could command OpenClaw and exposing it to the Internet with only a password to its control panel to prevent unwanted access.
In a report shared with WIRED, Valere researchers said that users “must accept that bots can be deceived.” For example, if OpenClaw is set up to summarize a user’s email, a hacker could send that person a malicious email instructing the AI to share copies of files on that person’s computer.
But Pistone is confident that security measures can be taken to make OpenClaw more secure. He has given a team of Valere 60 days to investigate. “If we don’t think we can do it in a reasonable amount of time, we’ll abandon it,” he says. “Whoever figures out how to make it safe for businesses will definitely be the winner.”
<a href