Why Your Favorite AI Founders Are Wrong About Ethics and the Pentagon

The open letter is the ultimate luxury good of the tech elite. It is a low-cost, high-visibility signal of moral superiority that allows founders and engineers to feel like the protagonists of a sci-fi thriller while they sit in climate-controlled offices in Palo Alto. The recent outcry from tech employees regarding Anthropic’s ties to the Pentagon isn't a brave stand for humanity. It is a fundamental misunderstanding of how the world actually works.

We are watching a generation of builders try to divorce their inventions from the messy, violent reality of geopolitical survival. They want the funding. They want the compute. They want the prestige. But they don't want the blood.

The premise of the letter—that AI companies should remain "neutral" or distance themselves from defense contracts to prevent "harm"—is a fantasy. It ignores the fact that in the current global order, there is no such thing as a neutral technology. If the most advanced systems aren't integrated into the democratic defense apparatus, they will simply be outpaced by systems that are integrated into authoritarian ones.

The Myth of the Clean-Hands Coder

Engineers love to believe their code is a sterile abstraction. It isn't. Every line of Python written to optimize a large language model (LLM) is a dual-use asset. The same transformer architecture that helps a teenager write a mediocre essay on The Great Gatsby is the architecture that will eventually coordinate drone swarms or identify vulnerabilities in a power grid.
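
If "dual-use" sounds abstract, here is a deliberately toy sketch in Python (PyTorch; the class and head names are my own hypothetical stand-ins, not anyone's production code). The backbone is identical in both cases; only the task head knows what the mission is:

```python
# Illustrative sketch only: the "essay" and "swarm" heads are hypothetical,
# but the point stands -- the shared backbone is the same either way.
import torch
import torch.nn as nn

class DualUseBackbone(nn.Module):
    """One transformer encoder; the only difference is the head bolted on."""
    def __init__(self, d_model=256, n_heads=4, n_layers=2, vocab=32000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab)  # head A: next-token logits for essay-grade prose
        self.coord_head = nn.Linear(d_model, 3)   # head B: the kind of regressor a swarm planner might use

    def forward(self, tokens, task="essay"):
        h = self.encoder(self.embed(tokens))
        return self.lm_head(h) if task == "essay" else self.coord_head(h)

model = DualUseBackbone()
tokens = torch.randint(0, 32000, (1, 16))
print(model(tokens, task="essay").shape)  # torch.Size([1, 16, 32000])
print(model(tokens, task="swarm").shape)  # torch.Size([1, 16, 3])
```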

To pretend otherwise is a form of intellectual dishonesty. I have sat in boardrooms where executives spent four hours debating "safety guardrails" for a chatbot, only to ignore the fact that their primary cloud provider is the backbone of regional surveillance states.

The "Founders and Engineers" signing these letters are late to the party. The infrastructure is already built. The chips are already spinning. The idea that you can "opt-out" of the military-industrial complex while utilizing the very internet and hardware it birthed is a laughable contradiction.

Geopolitical Realism vs. Silicon Valley Idealism

The "lazy consensus" suggests that by refusing to work with the Department of Defense (DoD), tech companies are making the world safer.

Let's dismantle that.

Imagine a scenario where Anthropic, OpenAI, and Google all successfully "de-militarize." They scrub their datasets, they implement "pacifist" RLHF (Reinforcement Learning from Human Feedback), and they refuse to sell a single API key to a three-letter agency.

Does the development of lethal autonomous weapons stop?

No. It accelerates in jurisdictions that don't have a culture of open letters. When Western tech leaders retreat from the defense space, they don't create a vacuum of peace; they create a vacuum of competence. They leave the most dangerous tools in the hands of legacy defense contractors who lack the talent, the speed, and the ethical rigor that these very "concerned engineers" claim to possess.

If you actually care about AI safety, you want the people who understand the risks to be the ones building the targeting systems. You don't want to outsource the "kill chain" to a firm that still thinks COBOL is a modern language.

The Anthropic Paradox

Anthropic was founded on the idea of "Constitutional AI." The goal was to build a system that follows a set of principles to ensure it remains helpful, harmless, and honest.

The signatories of the letter argue that working with the Pentagon violates these principles. This is a category error. A "harmless" AI in the context of a chatbot means it won't tell you how to make a pipe bomb. A "harmless" AI in the context of national security means it prevents the pipe bomb from going off in a crowded subway by identifying the threat before it manifests.

The "harmlessness" of a technology is entirely dependent on its application and the actor wielding it. By attempting to restrict Anthropic’s reach, these employees are effectively saying they trust the abstract concept of "public use" more than the specific, regulated oversight of a democratic government.

History shows us that private sector "neutrality" is often just a mask for profit-seeking without accountability. When you sell to "everyone," you sell to the highest bidder, regardless of their intent. At least with a DoD contract, there is a chain of command and a legal framework.

Why "People Also Ask" Is Asking the Wrong Questions

People often ask: "Should AI be used in warfare?"

That is the wrong question. It’s like asking if electricity should be used in warfare in 1915. It’s an inevitability.

The real question is: "Who do you want setting the standards for AI in warfare?"

If the answer is "no one," you’ve already lost. If the answer is "only the government," you’ve ignored the reality that the private sector owns the talent and the compute. The only viable path is a deep, uncomfortable, and transparent partnership between the tech sector and the defense establishment.

Another common query: "Can AI be regulated to prevent its use in weapons?"

Strictly speaking, no. You cannot regulate a mathematical concept. You can regulate the hardware (which we are doing via export controls on NVIDIA H100s) and you can regulate the deployment. But the "knowledge" of the model is portable. Once the weights are out, the weapon is out.
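
To make the portability point concrete, here is a minimal sketch (PyTorch; the filename and toy network are hypothetical placeholders, not any real checkpoint). Restoring a model from its weights is a file copy plus a few lines, which is precisely why export controls target silicon rather than files:

```python
# A hedged sketch of why weight-level regulation fails: once a checkpoint
# file exists, "acquiring the capability" is a copy plus three lines.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
torch.save(model.state_dict(), "leaked_weights.pt")          # someone exfiltrates this file...

state = torch.load("leaked_weights.pt", map_location="cpu")  # ...and anyone, anywhere, restores it
model.load_state_dict(state)
model.eval()  # the "knowledge" now runs on commodity hardware, no export license required
```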

The signatories of the Anthropic letter are fighting a battle over the intent of the founders, while the utility of the technology has already bypassed them.

The Cost of Moral Purity

There is a high price for the moral purity these employees are demanding.

  1. Atrophy of National Capability: If the best minds in AI refuse to work on defense, the nation's defensive capabilities will stagnate. This isn't just about "winning" a war; it's about deterring one. Weakness is provocative.
  2. The Rise of Shadow AI: Defense agencies won't stop seeking AI. They will just build it in the dark, with less oversight and potentially more dangerous shortcuts.
  3. Hypocrisy as a Business Model: Many of these companies already take money from venture capital firms that are funded by sovereign wealth funds with abysmal human rights records. To draw the line at the Pentagon—an organization dedicated to the defense of the very system that allows these tech companies to exist—is a bizarre form of selective outrage.

Stop Being Afraid of the Mission

I’ve seen this play out before. In the early days of cloud computing, there was a similar uproar. "Don't put government data on our servers," they said. Now, every major cloud provider runs a dedicated government cloud (AWS GovCloud, Azure Government) because they realized that the security requirements of the state actually made their commercial products better.

The same will happen with AI. The rigors of military-grade AI—reliability, interpretability, and edge-case robustness—will drive the entire industry forward.

If you’re an engineer at Anthropic and you’re genuinely terrified of what you’re building, you shouldn't be writing letters. You should be building the kill-switches. You should be deep in the architecture, ensuring that the "Constitutional AI" you're so proud of is robust enough to handle the pressures of a real-world conflict.
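
What does building the safety layer actually look like? Something like this hedged sketch (every name, policy check, and kill-file path below is hypothetical, and real deployment gating is vastly more involved), where refusal and shutdown happen before the model ever runs:

```python
# Hypothetical safety gate: an operator-controlled kill switch plus a policy
# check that runs before inference, not after the output is already out.
import os

KILL_FILE = "/etc/ai/disable"  # hypothetical path an operator touches to halt inference

def guarded_generate(model_fn, prompt: str, policy_check) -> str:
    """Refuse or halt before the model runs; never audit after the fact."""
    if os.path.exists(KILL_FILE):
        raise RuntimeError("Kill switch engaged: inference disabled by operator.")
    if not policy_check(prompt):
        return "Request refused by policy layer."
    return model_fn(prompt)

# Toy usage with stand-in components:
refuse_targeting = lambda p: "target coordinates" not in p.lower()
echo_model = lambda p: f"[model output for: {p}]"

print(guarded_generate(echo_model, "Summarize the logistics report.", refuse_targeting))
print(guarded_generate(echo_model, "Give me target coordinates.", refuse_targeting))
```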

Writing a letter is an exit strategy for your conscience. Building the safety layer is an entry strategy for your responsibility.

The world is not a seminar on ethics. It is a competitive arena where the most capable systems win. You can either be at the table, ensuring those systems reflect your values, or you can be on the sidelines, complaining about the menu while someone else eats your lunch.

Pick a side. Neutrality is just a slower way to lose.

Stop pretending your code doesn't have a side. It does. And if you aren't willing to defend the system that gave you the freedom to write it, you don't deserve the keyboard you're typing on.

Build the tools. Secure the perimeter. Grow up.

Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.