“Effective immediately, any contractor, supplier, or partner that does business with the United States military may not conduct any business activity with Anthropic,” Hegseth wrote in a social media post.
The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup’s AI models. In a blog post this week, Anthropic argued that its contracts with the Pentagon should not permit its technology to be used for large-scale domestic surveillance of Americans or for fully autonomous weapons. The Pentagon asked Anthropic to agree to let the US military apply its AI to “all legitimate uses” without any special exceptions.
The supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control or influence. Its purpose is to protect sensitive military systems and data from potential compromise.
Anthropic responded in another blog post Friday evening, saying it would “challenge any supply chain risk designation in court” and that such a designation would “set a dangerous precedent for any U.S. company interacting with the government.”
Anthropic said it has not received any direct communication from the Defense Department or the White House regarding negotiations over the use of its AI models.
“Secretary Hegseth has stated that this designation would prohibit anyone doing business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to enforce this statement,” the company wrote.
The Pentagon declined to comment.
“This is the most shocking, damaging, and over-the-top thing I’ve seen the United States government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and former senior policy adviser on AI at the White House. “We have essentially sanctioned an American company. If you’re an American, you should think about whether you should be here 10 years from now.”
People in Silicon Valley expressed similar shock and dismay on social media. Paul Graham, founder of startup accelerator Y Combinator, said, “The people running this administration are impulsive and vindictive. I believe that’s enough to explain their behavior.”
Boaz Barak, a researcher at OpenAI, said in a post that “Kneecapping one of our leading AI companies is just about our worst own goal. I have great hope that cooler heads will prevail and this announcement will be reversed.”
Meanwhile, OpenAI CEO Sam Altman announced Friday night that the company has reached an agreement with the Department of Defense to deploy its AI models in classified environments. “Our two most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said. “The DoW agrees with these principles, they are reflected in law and policy, and we have included them in our agreement.”
Confused Customers
In its Friday blog post, Anthropic said the designation, a supply chain risk determination under 10 USC 3252, applies only to the Defense Department’s direct contracts with suppliers, and does not govern how contractors use its cloud AI software to serve other customers.
Three experts on federal contracts say it’s impossible to determine at this point which Anthropic customers, if any, would now have to cut ties with the company. Alex Major, a partner at law firm McCarter & English who works with tech companies, says Hegseth’s announcement “is not grounded in any law that we can identify right now.”