“The First Amendment does not license the government to unilaterally impose contract terms, and Anthropic does not cite anything to support such a radical conclusion,” U.S. Justice Department lawyers wrote.
The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to tag the company with a designation that can bar firms from defense contracts over concerns about potential security vulnerabilities. Anthropic argues that the Trump administration overstepped its authority by imposing the designation and blocking the company's technologies from use inside the department. If the designation stands, Anthropic could lose billions of dollars in expected revenue this year.
Anthropic has asked the court to let it continue business as usual until the lawsuit is resolved. Judge Rita Lynn, who oversees the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to grant that request.
Justice Department lawyers, writing on behalf of the Defense Department and other agencies in a Tuesday filing, called Anthropic's concerns about potentially losing business "legally insufficient to amount to irreparable harm" and asked Lynn to deny the company relief.
The lawyers also wrote that the Trump administration was motivated to act because of "concerns about Anthropic's potential future conduct when maintaining access" to government technology systems. "No one has expressed any intention to restrict Anthropic's expressive activity," they wrote.
The government argues that Anthropic's insistence on limiting how the Pentagon could use its AI technology led Defense Secretary Pete Hegseth to "reasonably" determine that "Anthropic employees could sabotage, maliciously initiate unwanted actions, or otherwise destroy the design, integrity, or operation of the national security system."
The Defense Department and Anthropic are fighting over potential restrictions on how the company's Claude AI models can be used. Anthropic maintains that its models should not be used to facilitate mass surveillance of Americans, and that they are not currently reliable enough to pilot fully autonomous weapons.
Multiple legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measures amount to illegal retaliation. But courts often defer to the government's national security arguments, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted.
“In particular, the DoW became concerned that allowing Anthropic continued access to the DoW’s technical and operational warfare infrastructure would create unacceptable risk to the DoW supply chain,” Tuesday’s filing said. “AI systems are extremely sensitive to manipulation, and Anthropic may attempt to disable its technology or alter the behavior of its models before or during ongoing combat operations, if Anthropic—at its discretion—feels that its corporate ‘red lines’ are being crossed.”
The Defense Department and other federal agencies are working to replace Anthropic's AI tools with products from competing tech companies over the next few months. The military's most frequent use of Claude is through Palantir's data analysis software, people familiar with the matter told WIRED.
In Tuesday's filing, lawyers argued that the Pentagon "cannot flip a switch" at a time when Claude is the only AI model currently approved for use in the department's "classified systems and ongoing high-intensity combat operations." The department is working on deploying AI systems from Google, OpenAI, and xAI as alternatives.
Several companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders, have filed briefs in support of Anthropic. No brief has been filed in support of the government.
Anthropic has until Friday to file a reply to the government's arguments.