A federal court just handed Anthropic a temporary win. Judge Rita F. Lin of the U.S. District Court for the Northern District of California issued a preliminary injunction that pauses the government's effort to block the company while the lawsuit moves forward. The order becomes effective in seven days.
What the judge said
Judge Lin wrote that the government's own records show Anthropic was labeled a "supply chain risk" partly because of its "hostile manner through the press." She wrote, "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."
Why Anthropic sued
Anthropic asked the court to undo the supply chain risk designation. The company says the designation was retaliatory and violated its First Amendment rights after it publicly set limits on how its AI can be used. Anthropic has said it will not allow its Claude model to be used for domestic mass surveillance or for lethal autonomous weapons, meaning systems that can choose to kill without a human decision-maker.
Key facts Anthropic raised in court
- Defense Secretary Pete Hegseth sent a memo on January 9 requiring an "any lawful use" clause in AI service contracts within 180 days.
- Negotiations between Anthropic and the military centered on two red lines: domestic mass surveillance and lethal autonomous weapons.
- Anthropic's filings say the designation prompted confusion among partners and potential business losses that could total hundreds of millions to multiple billions of dollars.
How the Department of War responded
The Department of War argued to the court that Anthropic's restrictions create unacceptable national security risks. Among the scenarios the department raised was the possibility that Anthropic could alter or disable its model during combat operations. Judge Lin pressed the department for evidence that Anthropic would retain operational control over the model after it was delivered to the government.
What happened in court
During the hearing, Judge Lin questioned why a public post from Secretary Hegseth effectively told contractors they could not work with Anthropic. The post stated, in effect, that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The judge challenged the Department of War's representative on whether contractors would be terminated for working with Anthropic outside of their government work.
The department's representative said that non-Department work would likely not lead to termination, but gave no clear answer about some IT contractors who supply services to the military but not to national security systems. Judge Lin also referenced an amicus brief that used the phrase "attempted corporate murder," and observed that while it might be strong language, the actions at issue looked like an effort to severely harm Anthropic's business.
Why this matters
The supply chain risk label is typically reserved for foreign companies with ties to adversaries. Using it against a U.S. AI firm is rare, and the move sparked bipartisan concern that public disagreement with the government could bring severe consequences for private businesses.
Anthropic says the designation has already damaged relationships with partners and put substantial revenue at risk. The company also told the court it continues to suffer "irreparable" harm from the directive that discouraged commerce with it.
Anthropic spokesperson Danielle Cohen said the company is grateful the court moved quickly and that the court agreed Anthropic is likely to succeed on the merits. She added that Anthropic aims to keep working productively with the government to ensure safe, reliable AI for all Americans.
The preliminary injunction only pauses the designation. A final judgment could still take weeks or months as the lawsuit proceeds.