Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo

Anthropic has not “met the stringent requirements” to temporarily shed a supply-chain-risk designation imposed by the Pentagon, a U.S. appeals court in Washington, D.C., ruled Wednesday. The decision contrasts with one issued last month by a lower-court judge in San Francisco, and it was not immediately clear how the conflicting initial rulings would be resolved.

The government has sanctioned Anthropic under two different supply-chain laws with similar effects; the San Francisco and Washington, D.C., courts are each ruling on only one of them. Anthropic said it is the first U.S. company to be designated under the two laws, which are commonly used to punish foreign businesses that pose a threat to national security.

“The pause would force the United States military to proceed with its deal with an unsolicited vendor of critical AI services amid a significant ongoing military conflict,” the three-judge appellate panel wrote Wednesday. The panel said that while Anthropic could suffer financial losses from the ongoing designation, it did not want to risk “substantial judicial imposition on military operations” or “lightly override” the military’s decisions on national security.

A San Francisco judge had found that the Defense Department likely acted in bad faith against Anthropic, motivated by frustration over the AI company’s proposed limits on the use of its technology and its public criticism of those restrictions. The judge ordered the supply-chain-risk label removed last week, and the Trump administration complied by restoring access to Anthropic’s AI tools inside the Pentagon and the rest of the federal government.

Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, D.C., court “recognized that these issues need to be resolved quickly” and is confident that “the courts will ultimately agree that these supply chain designations were unlawful.”

The Defense Department did not immediately respond to a request for comment, but Acting Attorney General Todd Blanch posted a statement on Twitter. “Today’s ruling in the D.C. Circuit allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military preparedness,” he wrote. “Our position has been clear from the beginning – our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and the War Department, not to any technology company.”

These cases are testing how much power the executive branch has over the conduct of tech companies. They also mark an escalating battle between Anthropic and the Trump administration as the Pentagon deploys AI in the war against Iran. The company has argued that it is being illegally punished for insisting that its cloud-based AI tools lack the accuracy needed for certain sensitive missions, such as carrying out deadly drone strikes without human supervision.

Several experts on government contracting and corporate rights told WIRED that Anthropic has a strong case against the government, but courts sometimes refuse to overrule White House decisions on matters related to national security. Some AI researchers have said that the Pentagon’s actions against Anthropic “silence professional debate” about the performance of AI systems.

Anthropic has claimed in court that it has suffered business losses because of the designation, which bars the Pentagon and its contractors from using the company’s cloud AI as part of military projects. And as long as Trump remains in power, Anthropic will not be able to regain its significant foothold in the federal government.

The final decision in the company’s two lawsuits may take several months. The Washington court is scheduled to hear oral arguments on May 19.

The parties have so far disclosed few details about how the Defense Department has actually used Anthropic’s cloud AI or how much progress it has made in transitioning employees to other AI tools from Google DeepMind, OpenAI, or others. The department, which calls itself the War Department under President Trump, has said it has taken steps to ensure that Anthropic cannot intentionally harm its AI tools during the transition.

UPDATE 4/8/26 7:27 EDT: This story has been updated to include a statement from Acting Attorney General Todd Blanch.


