Anthropic’s Claude Takes Control of a Robot Dog

With robots starting to appear in warehouses, offices, and even people’s homes, the idea of large language models hacking into complex physical systems sounds like a science-fiction nightmare. So, naturally, Anthropic’s researchers were curious to see what would happen if Claude tried to control a robot – in this case, a robot dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to perform physical tasks. On one level, their findings demonstrate the agentic coding capabilities of modern AI models. On another, they hint at how these systems may begin to expand into the physical realm as models master more aspects of coding and become better at interacting with both software and physical objects.

“We suspect that the next step for AI models is to reach out into the world and impact the world more broadly,” Logan Graham, a member of Anthropic’s red team who studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”

Courtesy of Anthropic

Anthropic was founded in 2021 by former OpenAI employees who believed that AI could become problematic—even dangerous—as it advances. Today’s models aren’t smart enough to take full control of a robot, Graham says, but future models could be. He says studying how people leverage LLMs to program robots could help the industry prepare for the idea of “models eventually self-embodying,” referring to the idea that AI could someday operate physical systems.

It’s still unclear why an AI model would decide to take control of a robot – let alone do anything malevolent with it. But speculating about worst-case scenarios is part of Anthropic’s brand, and it helps establish the company as a major player in the responsible AI movement.

In an experiment called Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to perform specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group used Claude’s coding model; the other wrote code without AI assistance. The group using Claude was able to complete some—though not all—tasks faster than the humans-only group. For example, it got the robot to walk around and locate a beach ball, something the humans-only group could not figure out.

Anthropic also studied the dynamics of the collaboration by recording and analyzing the interactions within both teams. They found that the group without access to Claude expressed more negative emotions and confusion. This may be because Claude made it faster to connect to the robot and coded an easier-to-use interface.

Courtesy of Anthropic

The Go2 robot used in Anthropic’s experiments costs $16,900 – relatively cheap by robot standards. It is commonly deployed to perform remote inspections and security patrols in industries such as construction and manufacturing. The robot is capable of walking autonomously but typically relies on high-level software commands or a person operating a controller. The Go2 is made by Unitree, which is based in Hangzhou, China. According to a recent report from SemiAnalysis, its robots are currently the most popular on the market.

The large language models that power ChatGPT and other chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software – turning them into agents rather than mere text generators.
