As many AI builders continue operating in a “move fast and break things” moment, one major company has signaled there are limits it won’t abandon—even under intense government pressure.
Anthropic, the maker of Claude, says it “cannot in good conscience” comply with demands from the Pentagon and Secretary of Defense Pete Hegseth that would grant the US military broad, unrestricted access to its systems.
The company’s resistance centers on internal ethical boundaries it says it will not waive, including prohibitions on using its models to facilitate mass surveillance of Americans or to help develop fully autonomous weapons.
According to reporting, the Department of Defense is escalating the dispute by threatening to label Anthropic a “supply chain risk” if it refuses to accept terms that would allow open-ended military use—an action that could discourage or prevent other firms from adopting Anthropic’s technology.

Typically applied to foreign-linked organizations viewed as potentially undermining US interests, such a designation could be especially damaging for the five-year-old firm.
On top of losing business from partners wary of a supply-chain designation, the DoD could also terminate a reported $200 million deal related to Claude if Anthropic won’t permit use “for all lawful purposes,” Hegseth warned CEO Dario Amodei on Tuesday.
Anthropic has pushed back on that “lawful purposes” framing, later arguing in a statement that the wording functions as “legalese” that would ultimately “allow those safeguards to be disregarded at will.”
After months of private discussions that failed to produce an agreement, Hegseth has reportedly issued a firm deadline for Anthropic to sign on to the Pentagon’s terms.
Axios has reported that Amodei has until the end of today to agree, or face the consequences.

Amodei addressed the standoff in a lengthy statement on Thursday, arguing that the conflict is not about refusing to work with the US military, but about the government insisting on terms that could override Anthropic’s stated guardrails.
He said: “Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place. We remain ready to continue our work to support the national security of the United States.”
He also emphasized that Anthropic is not seeking to sit inside Pentagon decision-making, while warning that removing meaningful constraints from advanced AI could produce outcomes that run counter to democratic norms—saying that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
The dispute has turned heated. One government negotiator involved in the long-running talks publicly attacked Amodei, accusing him of dishonesty and of trying to dictate terms to the armed services.
Emil Michael, the Pentagon’s Undersecretary for Research and Engineering, said on X: “It’s a shame that Dario Amodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.
“The Department of War will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

