This Defense Company Made AI Agents That Blow Things Up

Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate work. The big difference is that instead of writing code, responding to emails, or buying goods online, Scout AI’s agents are designed to find and destroy things in the physical world with exploding drones.

In a recent demonstration held at an undisclosed military base in central California, Scout AI’s technology was put in charge of a self-driving off-road vehicle and a pair of deadly drones. AI agents used these systems to locate a truck hidden in the area, then destroyed it with an explosive charge.

“We need to bring the next generation of AI into the military,” Colby Adcock, CEO of Scout AI, told me in a recent interview. (Adcock’s brother, Brett Adcock, is CEO of Figure AI, a startup working on humanoid robots.) “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agent assistant to a war fighter.”

Adcock’s company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that the use of AI will be the key to military dominance in the future. The combat potential of AI is one reason the US government has sought to limit the sale of advanced AI chips and chip-making equipment to China, although the Trump administration has recently decided to loosen those controls.

“Pushing the envelope with AI integration is good for defense tech startups,” says University of Pennsylvania professor Michael Horowitz, who previously served at the Pentagon as deputy assistant secretary of Defense for force development and emerging capabilities. “If the United States is going to lead the military adoption of AI, this is exactly what they should do.”

However, Horowitz also notes that using the latest AI advances may prove particularly difficult in practice.

Large language models are inherently unpredictable, and AI agents built on them – such as the one behind the popular AI assistant OpenClause – can misbehave even when given a relatively benign task like ordering goods online. Horowitz says it may be particularly difficult to demonstrate that such systems are robust from a cybersecurity perspective – something that would be necessary for widespread military use.

Scout AI’s recent demo included several stages in which the AI had free rein over the combat systems.

At the beginning of the mission, the following commands were fed into the Scout AI system known as Fury Orchestrator:

Fury Orchestrator, send 1 ground vehicle to Checkpoint Alpha. Perform 2 drone kinetic strike missions. Destroy the blue truck 500 meters east of the airfield and send confirmation.

A relatively large AI model with over 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped on-site computer, interprets the initial command. Scout AI builds on an open source model – which it declines to name – with its built-in restrictions removed. This model acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicles and drones involved in the exercise. The smaller models act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles’ movements.
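The hierarchy described above – a large orchestrator model delegating tasks to smaller per-vehicle agents, which in turn drive low-level controls – follows a common multi-agent pattern. The sketch below illustrates that pattern in plain Python; all class and method names are hypothetical and are not Scout AI’s actual API, and the model calls are replaced with stubs.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleAgent:
    """Small on-vehicle agent: turns a task into low-level control steps.

    In the system described, this role is played by a ~10B-parameter
    model; here it is stubbed out to just record the task it was given.
    """
    name: str
    log: list = field(default_factory=list)

    def execute(self, task: str) -> str:
        self.log.append(task)  # stand-in for issuing low-level commands
        return f"{self.name}: completed '{task}'"

@dataclass
class Orchestrator:
    """Large top-level agent: parses a mission and delegates to sub-agents."""
    agents: dict

    def run_mission(self, mission: list[tuple[str, str]]) -> list[str]:
        # A real orchestrator would interpret free-form natural language;
        # this sketch takes the mission as pre-parsed (agent, task) pairs.
        return [self.agents[name].execute(task) for name, task in mission]

# Usage, mirroring the demo's structure: one ground vehicle, two drones.
fleet = {
    "ground-1": VehicleAgent("ground-1"),
    "drone-1": VehicleAgent("drone-1"),
    "drone-2": VehicleAgent("drone-2"),
}
orchestrator = Orchestrator(fleet)
report = orchestrator.run_mission([
    ("ground-1", "drive to Checkpoint Alpha"),
    ("drone-1", "search east of airfield"),
    ("drone-2", "search east of airfield"),
])
print(report[0])  # ground-1: completed 'drive to Checkpoint Alpha'
```

The key design point is that each layer only sees the abstraction below it: the orchestrator never touches vehicle controls directly, which is what lets the same top-level command fan out across heterogeneous hardware.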

A few seconds after receiving its marching orders, the ground vehicle set off down a dirt road winding between bushes and trees. After a few minutes, it stopped and dispatched a pair of drones, which flew over the area where the target was said to be waiting. After spotting the truck, an AI agent running on one of the drones issued the command to fly toward it and detonate an explosive just before impact.


