Axios reported Monday that Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for what a senior Defense official described as a "sh*t-or-get-off-the-pot meeting." The pressure stems from the fact that Anthropic's Claude is the only AI model approved for use at the Department of War, yet Anthropic has refused to lift its ethical safeguards in the ways the Pentagon demands.
The Pentagon wants what it calls access for "all lawful uses." Anthropic is willing to loosen restrictions but insists on maintaining two red lines: no mass surveillance of American citizens, and no weapons systems that fire without human involvement. The Pentagon has threatened to declare Anthropic a "supply chain risk," which would not only void its contracts but effectively blacklist Anthropic technology from any workflow touching the Defense Department.
The framing on both sides is wrong. Getting this dispute wrong will shape how governments use AI for decades to come. How we frame it will determine whether we reach a principled resolution or a dangerous one.
The Pentagon's demand for access to "all lawful uses" sounds reasonable until you realize what it actually means. It means the Pentagon wants the moral ceiling to be whatever Congress has authorized and courts have not yet struck down. But legal does not mean moral.
AI is too important a technology to treat as morally neutral, like a hammer. It can make decisions, and those decisions affect human lives. When AI is incorporated autonomously into weapons systems, for example, no human bears responsibility for the killing. That may be technically legal, but it is far from ethical.
An AI system designed to operate at the ceiling of current legality will, by design, operate right at the edge of whatever political winds currently allow — with no buffer, no friction, no institutional conscience. Legality is a floor, not a ceiling, and treating it as the latter is precisely how institutions drift into abuses that future generations will look back on with horror.
The Pentagon is also being strategically shortsighted. If its own argument is that Anthropic's constraints are "unduly restrictive," the burden is on the Pentagon to specify what legitimate operational need those constraints prevent. "All lawful uses" is not a specification. It is a power grab dressed in procedural language.
Anthropic's position is not wrong, but it is incomplete in a way that is making the problem worse.
"We refuse to enable mass surveillance of Americans" is a principled stance. But if that is the entirety of the position, it creates an operational nightmare for legitimate defense use, and it invites exactly the kind of frustration the Pentagon is expressing. A vendor-by-vendor permission slip regime for individual use cases does not scale across a military enterprise. That is a real problem.
The deeper issue is this: Anthropic's safeguards are largely in implementation — in deployment constraints, operator agreements, and policy guardrails — not deeply baked into the model itself. That means they are, to a significant degree, negotiable under sufficient pressure, bypassable through clever system design, and vulnerable to erosion over time. A red line that only holds when the other party respects it is not a red line. It is a preference.
And there is a strategic problem Anthropic cannot ignore: if it says no without offering a workable alternative, the Pentagon will rationally accelerate its search for replacements. These might include open source models, or even foreign-trained models with no safeguards whatsoever. The very outcome Anthropic is trying to prevent becomes more likely the more inflexible it is in negotiation.
Underneath this dispute is a question neither side is addressing directly: what is the appropriate institutional architecture for governing AI in national security contexts?
This is not primarily an AI ethics question. It is a governance question. And the answer cannot come from Anthropic alone, nor from the Pentagon alone. Both are the wrong institution to resolve it unilaterally.
Anthropic deciding what the U.S. military can use its technology for is a form of private technocratic governance that has no democratic legitimacy. A company — even a well-intentioned one — should not be in the business of making constitutional determinations about surveillance or rules of engagement. That is what courts, legislatures, and executive oversight mechanisms are for.
But the Pentagon demanding "all lawful uses" as a blank check is equally problematic. It means the only constraint on AI-enabled military capability is whatever the current administration has not yet been stopped from doing. That is a recipe for exactly the kind of incremental erosion that makes permanent damage inevitable.
Perhaps the best analogy, though far from perfect, is military dogs. There are very specific regulations on how dogs are used for military purposes, including, in some cases, attacking. But even then, the handler is the one who decides what the dog may or may not do, and he or she is responsible for any mistakes. The dog, no matter how clever, is never given agency.
A human must similarly make the decisions for anything an AI does that can harm people.
What a Good Solution Would Look Like
The parties are treating this as a binary: Anthropic either lifts its safeguards or loses the contract. But there is a third option, and it is the only one that makes long-term sense.
Build the governance architecture that makes the red lines operational, scalable, and enforceable — and bake it into U.S. policy.
Concretely, this would mean:
Pre-approved mission categories rather than case-by-case approvals. Intelligence analysis, logistics optimization, cyber defense, translation, planning support — these can be approved in advance without Anthropic having to vet each individual request. The friction point is approval architecture, not the underlying values.
Mandatory human accountability chains. For any lethal or rights-affecting decision, there must be an identifiable human who is legally and institutionally responsible. Not "the model recommended it." A person. This is not a technical constraint — it is a structural requirement that can be built into deployment protocols. And in the absence of such accountability, the legal and moral responsibility goes to the level of the person who approved the software and the policy to begin with. Not too many leaders want that kind of responsibility.
Audit trails and drift monitoring. Not just "did this specific case comply?" but "is the pattern of use over time moving toward or away from the stated constraints?" Erosion happens gradually. The only way to catch it is to watch for it systematically.
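Drift monitoring of this kind is straightforward to implement. A minimal sketch, with an illustrative threshold that would in practice be a policy decision: compare the distribution of use categories in a recent window against an approved baseline, and alert when the gap grows.

```python
from collections import Counter

def usage_distribution(events):
    """Turn a log of use-category labels into a frequency distribution."""
    counts = Counter(events)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def drift_score(baseline, recent):
    """Total variation distance between two usage distributions
    (0 = identical pattern of use, 1 = completely disjoint)."""
    categories = set(baseline) | set(recent)
    return 0.5 * sum(abs(baseline.get(c, 0.0) - recent.get(c, 0.0))
                     for c in categories)

baseline = usage_distribution(["analysis"] * 80 + ["logistics"] * 20)
recent   = usage_distribution(["analysis"] * 50 + ["surveillance"] * 50)

# The 0.25 threshold is an illustrative policy choice, not a standard.
if drift_score(baseline, recent) > 0.25:
    print("ALERT: usage pattern drifting from approved baseline")
```

No single event in the "recent" log need violate any rule for the alert to fire; it is the pattern, not the case, that trips the wire. That is the point the paragraph above makes about erosion.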
Defined civilian protection constraints for autonomous systems. The question of whether AI can execute within a bounded battlefield engagement zone — without human approval for each individual action — is no longer black and white. If a human commander defines the engagement envelope, retains override authority, and is accountable for outcomes, that may be defensible. But those conditions have to be encoded in the system, not just promised in a briefing.
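The conditions named above — a human-defined envelope, retained override authority — can be encoded rather than promised. A hypothetical sketch (the class and field names are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch: a human commander defines the engagement envelope;
# the system may act only inside it, and the human retains an override
# that halts autonomous execution entirely.
@dataclass
class EngagementEnvelope:
    authorized_by: str                       # accountable human commander
    zone: tuple[float, float, float, float]  # (lat_min, lat_max, lon_min, lon_max)
    halted: bool = False                     # human override switch

    def override(self) -> None:
        """Commander's kill switch: halts all autonomous execution."""
        self.halted = True

    def permits(self, lat: float, lon: float) -> bool:
        """Allow action only inside the commander-defined zone, and never after override."""
        if self.halted:
            return False
        lat_min, lat_max, lon_min, lon_max = self.zone
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

env = EngagementEnvelope("Col. A. Smith", (34.0, 34.5, 69.0, 69.5))
print(env.permits(34.2, 69.1))  # True: inside the commander-defined zone
env.override()
print(env.permits(34.2, 69.1))  # False: human override halts execution
```

The envelope is defined by, and attributable to, a named human; the system never gets to widen it.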
Meaningful judicial and congressional oversight for any domestic-facing use. "Mass surveillance" is too broad a term to be useful. Bulk metadata collection for counterterror purposes under judicial warrant is different from continuous population-level behavioral modeling for political risk assessment. The law already tries to make these distinctions. AI deployment should reinforce those distinctions, not sidestep them. This is implemented in policy and control systems, not in the AI logic itself.
Transparency about the standards themselves. If Anthropic is going to hold lines — and it should — the criteria for where those lines are must be public, explicit, and reviewable. "We know it when we see it" is not a governance framework. Neither side gets to operate in opacity.
The strongest argument for the Pentagon's position is also the one least carefully made: if China is deploying AI in weapons systems with no ethical constraints, the United States cannot afford to fight with one hand tied behind its back.
This argument deserves a serious answer.
National defense is a moral obligation, not merely a political preference. The preservation of a society that can maintain ethical governance is itself an ethical imperative. So the China comparison is not irrelevant — if adversaries develop unconstrained AI-enabled weapons and the U.S. does not, that creates a real capability gap with real consequences for real people.
But "our adversaries do it" does not automatically sanctify every countermeasure. History is full of examples of nations winning military contests while losing something more fundamental — the institutional integrity, civic trust, and moral coherence that make a society worth defending in the first place. Domestic mass surveillance infrastructure, once built, does not typically get dismantled when the emergency passes. The Patriot Act is not a historical curiosity.
The correct response to China's unconstrained AI deployment is not to abandon constraints. It is to build the governance architecture that allows robust capability with maintained accountability. Those are not mutually exclusive goals. They only seem that way when the negotiation is structured as a power contest rather than a design problem.
Here is the standard I would propose: AI should be refused — by companies, by institutions, by policy — when its use would systematically destroy the accountability architecture that makes moral governance possible.
That means AI that makes meaningful human oversight technically impossible should be refused. AI that is designed to be unauditable should be refused. AI that enables domestic coercion without legal process should be refused. AI that removes identifiable human responsibility from lethal decisions should be refused — not because autonomous execution is always wrong, but because accountability is not optional. And AI must be corrigible: when it makes a mistake, it must be possible to correct it.
Everything else — including a great deal of powerful, sensitive, consequential capability — can be permitted if the governance architecture is sound. You don't put a notice on the hammer that it is not meant to be used on skulls. You make a policy that prohibits hurting others.
The Pentagon is hardly unfamiliar with policies and procedures. It has detailed rules about what is allowed in battle and detailed manuals on how to use weapons systems. Its use of AI must be similarly well thought out and, as much as possible, public.
This is not to say that AI shouldn't have morals baked in: it absolutely should. But it should not be expected to understand how it might be manipulated, or the second-order consequences of its decisions outside its domain. AI, especially in government, is part of a huge ecosystem of software, governance, and politics.
The current negotiation is structured to produce a bad outcome for everyone. Anthropic loses its contract or its principles. The Pentagon gets either a constrained tool or an unconstrained one with no safeguards. The American public gets whatever the executive branch decides it can get away with.
A better outcome requires both parties to stop treating this as a contest and start treating it as a design problem. The principles exist. The technical capability exists. The policy framework does not yet exist — but it could, and building it would serve everyone's legitimate interests, including the country's.
Everyone wants the same thing. But they are arguing on the wrong level.
That is the conversation worth having.
Elder of Ziyon