Anthropic refuses to bow to Pentagon despite Hegseth’s threats

Despite Defense Secretary Pete Hegseth’s ultimatum, Anthropic said it could not “in good conscience” comply with the Pentagon’s order to remove the guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Defense Department had threatened to cancel the $200 million contract and label Anthropic a “supply chain risk” if the company did not agree to remove its safeguards against mass surveillance and autonomous weapons.

“Our strong priority is to continue serving the department and our warfighters — with our two requested safeguards in place,” Amodei said. “We stand ready to continue our work to support the national security of the United States.”

In response, US Under Secretary of Defense Emil Michael wrote in a post on X that Amodei “wants nothing more than to try to personally control the US military and is OK with endangering our nation’s security.”

The standoff began when the Pentagon demanded that Anthropic make its Claude AI models available for “all legitimate purposes” — including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refuses to offer its technology for those uses, even with the “security stack” built into the model.

Axios reported that Hegseth had given Anthropic a deadline of 5:01 pm on Friday to agree to the Pentagon’s terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic as a “supply chain risk” — a designation typically reserved for companies from adversaries such as China, one that “had never been applied to a U.S. company before,” Anthropic wrote.

Amodei declined to change his stance and said that if the Pentagon decided to drop Anthropic, “we will work to enable a smooth transition to another provider while avoiding any disruption to ongoing military planning, operations or other critical missions.” xAI’s Grok is reportedly among the alternatives the DoD is considering, along with Google’s Gemini and OpenAI’s models.

However, it will not be so easy for the military to separate itself from Claude. Until now, it has been the only model allowed for the most sensitive tasks in military intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuela operation in which US forces ousted the country’s President Nicolas Maduro and his wife.

AI companies have been widely criticized for potential harm to users, but enabling mass surveillance and autonomous weapons would clearly take this to a new level. Anthropic’s response to the Pentagon was seen as a test of its claim to be the industry’s most safety-focused AI company, especially after it walked back a key safety pledge days earlier. Now that Amodei has responded, attention will turn to the Pentagon to see whether it follows through on its threats, which could seriously harm Anthropic.
