I vented my frustration last week when the Trump administration labeled Anthropic a “supply chain risk,” a designation normally reserved for companies with ties to foreign adversaries like China. Here’s what’s transpired since.
Anthropic filed two federal lawsuits this morning: one in the U.S. District Court for the Northern District of California, and one in the U.S. Court of Appeals for the D.C. Circuit. The company is challenging the Pentagon’s decision to designate it a “supply chain risk,” a label that effectively bars any defense contractor from doing business with Anthropic.
The legal arguments matter more than the politics. Here’s why.
The Designation Was Designed for Foreign Adversaries
The supply chain risk statute exists for a specific reason: to protect the defense industrial base from sabotage, backdoors, and espionage, primarily from companies with ties to hostile foreign governments. It has never been used against an American company.
Anthropic’s filing argues that the statute was applied outside its intended scope. The company isn’t accused of embedding vulnerabilities in its software. It isn’t accused of sharing classified data with foreign governments. It isn’t accused of any technical failure or security breach. Its offense was telling the Pentagon it wouldn’t remove two contractual restrictions from a $200 million deal: no mass domestic surveillance of Americans, and no fully autonomous weapons.
That’s a contract negotiation disagreement, not a supply chain risk.
The First Amendment Argument
The more significant claim is constitutional. Anthropic argues the designation is retaliation for protected speech: specifically, the company’s public advocacy for AI safety guardrails.
This is where the government’s own record works against it. Trump called Anthropic “Leftwing nut jobs” on Truth Social. Hegseth accused the company of “arrogance and betrayal” and of being “cloaked in the sanctimonious rhetoric of ‘effective altruism.’” These aren’t the words of officials making a dispassionate national security determination. They’re the words of officials punishing a company for its publicly stated beliefs.
When the government creates an extensive public record of ideological hostility toward a company and then uses an emergency national security tool against that same company, courts tend to notice the pattern. The travel ban litigation taught us that much.
The Industry Knows This Is Wrong
Last week the Information Technology Industry Council (whose members include NVIDIA, Google, Microsoft, Apple, and Amazon) sent a letter to Hegseth expressing concern. Their language was measured but pointed: “Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries.”
When every major tech company in the country, including Anthropic’s direct competitors, tells the government it’s misusing an emergency authority, that should carry weight.
Sam Altman, CEO of OpenAI and Anthropic’s most direct competitor, stated publicly that enforcing the supply chain risk designation “would be very bad for our industry and our country.” OpenAI itself renegotiated its own Pentagon contract to add explicit protections against domestic surveillance, the very restriction Anthropic was punished for insisting on.
The Irony the Court Will See
Here’s the fact that undermines the government’s entire position: the Pentagon is still using Claude.
According to multiple reports, Anthropic’s AI continues to operate on classified Pentagon systems, including in active operations in the Middle East. The technology the President called dangerous enough to warrant a government-wide ban is simultaneously being relied on for combat intelligence.
You cannot credibly argue that a product is a supply chain risk while you’re actively depending on it in theater. That contradiction will be difficult to explain to a federal judge.
What the Lawsuits Don’t Say
Anthropic isn’t asking to dictate military policy. The company has offered to continue negotiating and even offered to help the Pentagon transition to a different AI provider during the dispute. This is a company trying to protect its business and its right to set product terms, the same right every defense contractor exercises when it defines what it will and won’t sell.
The legal representation tells its own story. Anthropic retained WilmerHale, one of the law firms Trump previously targeted with executive orders threatening its lawyers’ security clearances. WilmerHale challenged those orders and won. They know how this administration operates in court.
Why This Matters Beyond Anthropic
If this designation stands, the precedent is simple: any company that publicly disagrees with a Pentagon policy can be labeled a supply chain risk and cut off from the defense market. That’s not a national security tool; it’s an economic weapon aimed at speech.
I wrote in February that the real problem was the absence of law governing military AI use. That’s still true. But today we have something we didn’t have then: the question is now in front of federal judges instead of being decided by social media posts and Friday deadlines.
The courts are where this always needed to end up. I’m glad it’s finally there.
