I use Claude every day. It helps me write code, draft documents, analyze problems, and think through decisions. It’s the most useful tool I’ve added to my workflow in thirty years of IT work. So when I see the Secretary of Defense threatening to destroy the company that makes it because they won’t remove the safety limits on their AI, I pay attention.
Here’s what’s happening. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei at the Pentagon today and gave him until 5:01 PM Friday to agree to let the military use Claude for “all lawful uses”—no restrictions, no guardrails, no questions asked. If Anthropic doesn’t comply, the Pentagon will cancel its $200 million contract, designate the company a “supply chain risk” (effectively a government blacklist), and potentially invoke the Defense Production Act—a 1950s Cold War law—to compel access.
Anthropic has two non-negotiable positions: no fully autonomous weapons that select and engage targets without human oversight, and no mass domestic surveillance of Americans. That’s it. They’re not refusing to work with the military—Claude was already used during the Maduro raid in Venezuela through a Palantir partnership. They support missile defense, intelligence analysis, logistics, and other military applications. They just won’t build a system that decides who to kill on its own.
The Pentagon’s response? Get on board or get destroyed.
This Isn’t Free Enterprise
Under Secretary Emil Michael has argued that it’s “not democratic” for Anthropic to limit how the military uses Claude. His reasoning: rules should be set by the President, Congress, and federal agencies—not individual companies.
That framing is backwards. A private company deciding what products it will and won’t sell is the most basic expression of free enterprise. A defense secretary threatening to invoke Cold War compulsion laws to force a company to remove safety features from its product is the opposite of free enterprise. That’s coercion.
Imagine a gun manufacturer that voluntarily decides not to sell a particular weapon to a particular buyer. You might disagree with the decision, but nobody would argue the government should compel the sale. Yet replace “gun manufacturer” with “AI company,” and suddenly the principle evaporates.
The irony is thick. The same political movement that spent the last decade arguing that tech companies should be free to set their own policies—no government interference, no regulation, let the market decide—is now arguing that the government should force a tech company to change its product because the government wants it. The principle was never “free enterprise.” It was “do what we say.”
The Question Nobody Wants to Answer
Set aside the coercion for a moment. Set aside the politics. Let’s engage with the actual question. Should we build AI systems that autonomously decide who to kill?
The Pentagon frames this as a readiness issue: we need every tool available, our adversaries won’t limit their AI, and if we handicap ours, we lose.
But the people making this argument aren’t addressing the fundamental problem: AI doesn’t know who the good guys are.
Right now, today, an AI model can’t reliably distinguish between a combatant and a civilian, between a hostile actor and an allied soldier, between a threat and a person holding a cell phone. These systems hallucinate. They misidentify. They lack the contextual judgment that a human operator brings—imperfect as that is.
And here’s the question that really bothers me. Suppose we build a fully autonomous weapons system and hand it to this administration. The AI learns: these are the allies, these are the adversaries, this is the mission. Then inauguration day comes. A new president takes office with a different foreign policy, different alliances, different threat assessments. Does the AI just flip? Does it recalibrate its understanding of who’s a threat based on the political calendar?
Or worse—what if it doesn’t flip? What happens when an autonomous weapons system trained under one administration’s threat model is still operating under the next? The weapon doesn’t care about elections. It doesn’t watch inaugural addresses. It has a model of the world that was defined by the people who configured it, and those people might not be in charge anymore.
This isn’t hypothetical hand-wringing. The United States has changed its foreign policy toward specific countries multiple times within a single presidential term. We’ve gone from diplomatic engagement to confrontation with nations in the span of months. An autonomous weapons system doesn’t absorb nuance. It has targets and non-targets, and if the political reality shifts faster than the system gets reconfigured, people die.
What Could Go Wrong? Almost Everything.
The Lawfare Institute nailed the structural problem: “The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints.”
There is no law governing autonomous weapons use by the U.S. military. There is no law governing how AI can be used for mass surveillance of American citizens. Congress hasn’t passed anything. No framework exists that survives the next change of administration. The Pentagon isn’t asking for authority under a legal structure—they’re demanding a blank check and threatening the company that won’t hand it over.
NYU’s Stern Center for Business and Human Rights published an analysis warning that if the government punishes companies for maintaining ethical guardrails, “it sends a clear message to the entire industry: responsibility is a liability.” That’s not a hypothetical—xAI signed a deal this week to let the military use Grok with zero restrictions. OpenAI and Google have already agreed to the “all lawful uses” standard. If Anthropic is destroyed for holding the line, no company will hold the line again.
Here’s a non-exhaustive list of what keeps AI safety researchers awake:
- Misidentification. AI systems misidentify targets at rates that would be unacceptable for any human decision-maker. An autonomous weapon that gets it wrong doesn’t get court-martialed. It gets retrained.
- Adversarial manipulation. AI systems can be fooled. Adversaries can spoof signals, manipulate sensor data, or exploit model weaknesses to cause an autonomous system to misidentify targets. This isn’t theoretical—adversarial attacks on image recognition systems are well-documented.
- Escalation dynamics. Autonomous systems can act faster than humans can deliberate. If both sides deploy autonomous weapons, you get machines making lethal decisions at machine speed with no human in the loop to pause, question, or de-escalate. The Cuban Missile Crisis was resolved because humans had time to think. Machines don’t take that time.
- Accountability vacuum. When an autonomous weapon kills the wrong people—and it will—who is responsible? The AI doesn’t face prosecution. The commander who authorized its deployment will argue they relied on the technology. The company that built it will point to the contract terms. The accountability disappears into a bureaucratic void.
- Mission creep. Surveillance tools built for battlefield intelligence get repurposed for domestic monitoring. It has happened with every surveillance technology the government has ever deployed. The NSA’s post-9/11 surveillance apparatus was built for counterterrorism and ended up sweeping up Americans’ phone records. There is no reason to believe AI surveillance will be different.
The Real Problem Is the Absence of Law
I don’t think Anthropic should be the entity making national security policy. I agree with the Lawfare analysis on that point. But the solution isn’t to strip a private company of its right to set product limitations. The solution is for Congress to do its job.
Pass a law. Define what autonomous weapons can and can’t do. Set rules for military AI use that apply to every company, every administration, and every future conflict. Create an oversight structure with teeth. Do the hard, boring, politically difficult work of legislating instead of relying on a defense secretary to bully individual companies into compliance.
Until that happens, Anthropic’s two red lines—no autonomous targeting without human oversight, no mass domestic surveillance—aren’t just reasonable. They’re the only guardrails that exist.
I use Claude every day. I rely on it. I don’t want to see the company that builds it put in a position where it has to choose between its principles and its survival. No company should have to make that choice because a cabinet secretary issued a Friday deadline.
Anthropic didn’t build safety guardrails because they’re naive. They built them because they understand what they’ve created. Until Congress does its job and passes actual legislation, those guardrails are all we’ve got.
