Why the Pentagon can't quit Anthropic's Claude even during a war with Iran

War doesn't wait for paperwork. Last Friday, President Trump hopped on Truth Social to blast Anthropic as a "Radical Left AI company" and ordered every federal agency to stop using its Claude model immediately. Defense Secretary Pete Hegseth went a step further, labeling the startup a "supply chain risk," a tag usually reserved for Chinese firms like Huawei. But just hours later, as American and Israeli jets began a massive bombardment of Iran, those same military leaders were leaning on Claude to pick their targets.

It’s a bizarre contradiction that shows how deeply AI has already burrowed into the machinery of modern combat. You can’t just flip a switch and go back to paper maps and human-only intelligence when you’re trying to coordinate strikes against 1,000 targets in 24 hours. The Pentagon is effectively at war with its own primary intelligence tool while using it to fight a real war in the Middle East.

The secret role of Claude in Project Maven

If you're wondering how a "safe" AI from San Francisco ended up in a cockpit over Tehran, the answer is Palantir. Anthropic doesn't just sell chat subscriptions; its Claude model is the brain inside Palantir's Maven Smart System, the Pentagon's premier AI-powered targeting platform.

During the opening wave of the Iran campaign, Maven didn't just suggest a few spots to hit. It processed mountains of satellite imagery, intercepted signals, and drone feeds to identify hundreds of Iranian military objectives, then produced precise coordinates and ranked the targets by strategic value. According to reporting from The Washington Post, Claude turned what used to be weeks of battle planning into operations that happen in real time.
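To make that pipeline concrete, here is a deliberately toy sketch of the general pattern being described: fuse confidence scores from several sensor sources, scale by assessed value, and rank. Every field name, weight, and number below is invented for illustration; none of it is drawn from Maven, Palantir, or Anthropic.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    lat: float
    lon: float
    sat_conf: float     # confidence from satellite imagery, 0..1
    sigint_conf: float  # confidence from intercepted signals, 0..1
    drone_conf: float   # confidence from drone video, 0..1
    value: float        # assessed strategic value, 0..1

def composite_score(c: Candidate) -> float:
    # Weighted fusion of per-source confidences, scaled by assessed value.
    # The weights are made up; tuning them is the hard (and classified) part.
    confidence = 0.40 * c.sat_conf + 0.35 * c.sigint_conf + 0.25 * c.drone_conf
    return confidence * c.value

def rank_candidates(candidates: list[Candidate]) -> list[Candidate]:
    # Highest composite score first; a human still approves the final list.
    return sorted(candidates, key=composite_score, reverse=True)

targets = rank_candidates([
    Candidate("site-a", 35.70, 51.40, 0.9, 0.8, 0.7, 0.95),
    Candidate("site-b", 34.64, 50.88, 0.6, 0.4, 0.9, 0.50),
])
print([c.name for c in targets])
```

Even in a toy like this, note where the human sits: at the very end, approving a list the machine has already ordered.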

The military isn't just using it for targeting, either. Commanders are using Claude to run "what-if" simulations on the fly. If we hit this radar site, what’s the likelihood of an immediate missile reprisal? Claude handles the math. It’s the difference between guessing and moving with data-backed confidence.
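For flavor, here is what the skeleton of such a what-if estimate could look like. This is a toy Monte Carlo with invented probabilities, not anything Claude or the Pentagon actually runs; a real model would fold in thousands of variables.

```python
import random

def reprisal_probability(radar_destroyed: bool, trials: int = 100_000) -> float:
    """Toy Monte Carlo estimate of an immediate missile reprisal.

    Every probability below is invented for illustration only.
    """
    reprisals = 0
    for _ in range(trials):
        # Assume destroying the radar degrades command-and-control capability.
        command_intact = random.random() < (0.4 if radar_destroyed else 0.8)
        # Assume a fixed 30% chance the leadership opts to escalate at all.
        wants_to_escalate = random.random() < 0.3
        if command_intact and wants_to_escalate:
            reprisals += 1
    return reprisals / trials

print(f"Immediate reprisal odds, radar destroyed: {reprisal_probability(True):.1%}")   # ~12%
print(f"Immediate reprisal odds, radar intact:    {reprisal_probability(False):.1%}")  # ~24%
```

The point isn't the numbers; it's the speed. A commander can rerun that question with fresh assumptions in seconds instead of waiting on a staff estimate.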

Why the Anthropic feud turned toxic

The fight between Anthropic CEO Dario Amodei and the Trump administration isn't about the technology's performance. Claude works, arguably too well. The feud escalated because Anthropic tried to set "red lines."

Amodei refused to sign a contract that would allow the military to use Claude for "any lawful purpose." Specifically, Anthropic wanted a guarantee that its AI wouldn't be used for two things:

  1. Fully autonomous weapons systems (the "killer robot" scenario).
  2. Mass surveillance of American citizens.

The Pentagon's response was essentially "trust us." Emil Michael, the under secretary of defense, argued that internal policies already restrict those activities and that a private company shouldn't be dictating terms to the U.S. military. When Anthropic wouldn't budge, Hegseth and Trump decided to make an example of it. They want companies that "bow down," as Jack Shanahan, the retired general who first led the Pentagon's Joint Artificial Intelligence Center, put it.

The OpenAI pivot and the six-month trap

While Anthropic is being dragged through the mud, OpenAI is moving into the vacuum. Sam Altman's team recently signed a deal to put its models on classified networks. OpenAI claims to have guardrails too, but it was clearly willing to play ball where Anthropic wasn't. This has triggered a massive backlash from users: uninstalls of the ChatGPT app reportedly spiked nearly 300% after the news broke.

But here's the reality: the Pentagon has given itself six months to phase out Anthropic. Why six months? Because it cannot function without Claude right now. The integration with Palantir and other defense contractors is so deep that ripping it out mid-war would be a tactical disaster.

Digital intelligence has no loyalty

The Iran strikes prove that AI-driven warfare is no longer a future concept. It is the current reality. We're seeing "speed of thought" bombing runs where the bottleneck isn't how fast a jet can fly, but how fast a human can click "approve" on a list of targets generated by an algorithm.

Ethical questions are piling up. Experts are worried that human review of these AI-generated lists is becoming perfunctory. If an AI hands you 500 targets and tells you they’re all valid, are you really checking the legal justification for each one? Probably not.

Watch the courtrooms next. Anthropic is challenging the "supply chain risk" designation in court, arguing that a contract dispute isn't the same as being a national security threat. In the meantime, the very people calling the company a threat will keep using its code to fight a war.

If you want to understand where this is headed, pay attention to the upcoming procurement rounds for "agentic AI." The military isn't just looking for chatbots; it wants autonomous agents that can manage logistics and electronic warfare. The fight over who builds those brains, and what rules they follow, is just getting started.

Lily Young

With a passion for uncovering the truth, Lily Young has spent years reporting on complex issues across business, technology, and global affairs.