After a week-long standoff with the Pentagon, Anthropic achieved a milestone: A judge granted the company a preliminary injunction in its lawsuit seeking to overturn its government blacklisting while the judicial process continues.
“War Department records show that it designated Anthropic as a supply chain risk because of its ‘hostile approach through the press,'” wrote Rita F. Lin, a district judge in the Northern District of California, in the order, which takes effect in seven days. “Punishing Anthropic for publicly investigating the government’s contracting situation is classic illegal First Amendment retaliation.”
A final decision may take several weeks or months.
“We are grateful to the court for moving quickly, and are pleased that they agree that Anthropic is likely to succeed on the merits,” Anthropic spokeswoman Danielle Cohen said in a statement Thursday. “While this case was necessary to protect Anthropic, our customers, and our partners, we remain focused on working productively with the government to ensure that all Americans can benefit from safe, trusted AI.”
“I think this case touches on an important debate,” Judge Lin said during Tuesday’s hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand, the War Department is saying that military commanders get to decide what is safe for its AI to do.”
On Tuesday, Judge Lin said, “It is not my role to decide who is right in that debate… The War Department gets to decide which AI products it wants to use and buy. And everyone, including Anthropic, agrees that the War Department is free to stop using Claude and seek a more permissive AI vendor.” She added, “I see the question in this case as… whether the government has gone beyond this and violated the law.”
It all started with a memo sent by Defense Secretary Pete Hegseth on January 9, calling for “any lawful use” language to be written into every AI services purchase contract within 180 days, including existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon lasted several weeks and hinged on two “red lines,” uses for which the company did not want the military to employ its AI: domestic mass surveillance and lethal autonomous weapons (AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed included a flood of insults on social media, a formal “supply chain risk” designation with the potential to significantly disrupt Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.
With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and is seeking to overturn the supply chain risk designation.
It is rare, and possibly even unheard of, for a US company to be named a supply chain risk, a designation usually reserved for non-US companies with potential links to foreign adversaries. Designating Anthropic as such raised eyebrows across the country and stirred bipartisan concern that disagreeing with the presidential administration could lead to massive retaliation against businesses in any sector.
Anthropic’s own business has been significantly affected by the designation, according to its court filing, which states that it “has received outreach from numerous external partners … expressing concerns about what was expected of them and their ability to continue working with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate use. Depending on how broadly the government restricts its contractors’ work with Anthropic, the company alleged, hundreds of millions to billions of dollars in revenue could be at risk.
During Tuesday’s hearing, both parties got a chance to answer Judge Lin’s questions, which were released in a document a day earlier and hinged on whether Hegseth had the authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked in her pre-released questions about the circumstances under which a government contractor could face dismissal for using Anthropic’s technology in its work – for example, “If a contractor of the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face dismissal as a result?”
On Tuesday, the judge also rebuked the War Department over Hegseth’s X post, which, according to Anthropic’s previous court filings, created widespread confusion by stating that “effective immediately, any contractor, supplier, or partner that does business with the United States military may not conduct any commercial activity with Anthropic.”
“You’re standing here saying, ‘We said it, but we didn’t really mean it,'” Judge Lin said during the hearing. She later pressed the question of why Hegseth wrote the post to prevent contractors from working with Anthropic rather than simply designating Anthropic as a supply chain risk.
In a series of questions Tuesday, Judge Lin asked whether the War Department planned to terminate contractors based on their work with Anthropic even if that work is separate from their work with the department, and a War Department representative responded, “That is my understanding.”
Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I wouldn’t be fired for using Anthropic – is that correct?” The War Department representative responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor that provides IT services to the War Department, but not to national security systems, could be terminated for using Anthropic, the War Department representative did not provide a concrete answer.
During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder’ or not, but it looks like an attempt to cripple Anthropic.”
“We continue to suffer irreparable harm from this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.
In a recent court filing, the Defense Department alleged that Anthropic could apparently “attempt to disable its technology or alter the behavior of its models before or during ongoing combat operations” if it felt the military was crossing its red lines — a theoretical situation the Pentagon has said it considers an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, asking, “What evidence in the record suggests that Anthropic had continued access to or control over Claude after it was turned over to the government, such that Anthropic could engage in such acts of sabotage or subversion?”