Anthropic Rejects Pentagon Request to Loosen AI Guardrails

Anthropic has publicly refused to weaken its AI safeguards, even under direct pressure from the US Department of Defense. The dispute centers on the Pentagon’s request that Anthropic permit “any lawful use” of its Claude AI models by the US military.

In a formal statement, CEO Dario Amodei made the company’s position clear: Anthropic will not revise two core policies governing mass domestic surveillance and fully autonomous weapons. He emphasized that external pressure would not shift the company’s stance, stating that it could not “in good conscience” agree to the Pentagon’s request.

The Pentagon has reportedly set a deadline for Anthropic to comply. If the company refuses, it risks termination of its partnership and designation as a supply chain risk, a label that would effectively restrict other military contractors from working with it.

AI Policy Dispute Over Mass Surveillance and Autonomous Weapons

Anthropic’s Position on Mass Domestic Surveillance

Anthropic has identified mass domestic surveillance of American citizens as one of the prohibited use cases for its AI systems. While the company acknowledges that AI-driven mass surveillance may remain legal under current law, it argues that legal frameworks have not yet caught up to the expanding capabilities of artificial intelligence.

According to Amodei, certain applications of AI can undermine democratic values rather than defend them. The company’s refusal is rooted in a broader ethical concern: the belief that rapidly evolving AI systems could enable large-scale surveillance practices that challenge civil liberties.

In contrast, the Department of Defense has publicly stated it has “no interest” in using AI to conduct mass surveillance of Americans and called narratives suggesting otherwise “fake.” However, it continues to push for unrestricted lawful use of Anthropic’s models.

Concerns About Fully Autonomous Weapons Systems

The second major point of contention involves fully autonomous weapons: systems capable of selecting and engaging targets without human input.

Anthropic maintains that while such systems could potentially support national defense in the future, the current technology is not reliable or safe enough. The company has explicitly stated it will not knowingly provide products that could put warfighters or civilians at risk.

Amodei emphasized that today’s AI systems are not yet capable of safely and consistently handling life-and-death military decisions without meaningful human oversight. He also noted that Anthropic offered to collaborate with the Department of Defense on research and development to improve reliability, but that offer was reportedly declined.

Pentagon Response and Escalating Tensions

The dispute has intensified publicly. Senior Defense Department officials have criticized Anthropic’s position, with Under Secretary of War Emil Michael describing the CEO in harsh terms and accusing him of jeopardizing national safety.

Defense Department spokesman Sean Parnell reiterated that the Pentagon’s request is limited to lawful purposes and framed it as a “common-sense” measure necessary to prevent disruption of critical military operations. He also stated that the Department would not allow any private company to dictate operational decisions.

The Pentagon has reportedly issued a deadline of 5:01 p.m. ET on a specified Friday, after which it may terminate the partnership. Additionally, Defense Secretary Pete Hegseth is said to be exploring whether the Defense Production Act could be used to compel broader access to Anthropic’s AI systems.

If the partnership ends, Anthropic has pledged to facilitate a smooth transition to prevent disruption to military planning or ongoing missions.

Industry Context: How Other AI Companies Responded

According to government officials cited in reporting, other major AI providers—including Google, OpenAI, and xAI—have agreed to the Pentagon’s requested changes on unclassified networks. Negotiations regarding classified network use are reportedly ongoing.

This contrast places Anthropic in a distinct position within the AI industry. While competitors appear willing to expand military access under certain conditions, Anthropic is drawing a firm line around specific high-risk use cases tied to surveillance and autonomous weapons.

The situation highlights a broader industry tension: balancing national security collaboration with ethical AI deployment standards.

National Security, Democratic Values, and AI Governance

Amodei has expressed strong support for using AI to defend the United States and allied democracies. However, he has also warned that in narrowly defined cases, AI can undermine the very democratic principles it is meant to protect.

The company’s stance reflects a governance philosophy centered on responsible AI deployment, particularly in high-stakes domains such as military operations. By refusing unrestricted military use, Anthropic is signaling that legality alone does not satisfy its internal safety thresholds.

The Pentagon, on the other hand, argues that operational flexibility is essential and that limiting lawful applications could jeopardize readiness and national defense effectiveness.

This conflict illustrates a fundamental question shaping the future of artificial intelligence: who ultimately determines acceptable use—governments, corporations, or evolving legal frameworks?