Trump Orders Government Agencies to Cut Ties With Anthropic

Photo by: Unsplash
By Diego Valverde | Journalist & Industry Analyst - Tue, 03/03/2026 - 12:12

The Donald Trump administration’s decision to cut federal ties with Anthropic over military AI restrictions underscores rising tensions between ethical AI governance and national security. The move signals heightened regulatory risk for AI providers serving government-linked industries, including defense, finance, and critical infrastructure.

The administration of US President Donald Trump has ordered federal agencies and military contractors to terminate all business relationships with Anthropic, following a dispute over restrictions on the use of AI in national security operations. The directive includes a six-month phaseout period and a designation of the company as a supply chain risk.

Anthropic confirms that its refusal to permit unrestricted use of its technology triggered the government response. In a statement reported by CNN, the company says: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.” The company adds that it intends to challenge any supply chain risk designation through legal action.

Anthropic’s Claude model was the first large language model authorized to operate within classified networks of the Pentagon. In mid-2024, the company secured a contract valued at up to US$200 million to support defense-related applications, according to reporting by Bloomberg.

Unlike other AI providers working with the Department of Defense, Anthropic embedded explicit usage restrictions into its government contracts. These restrictions prohibit the deployment of Claude in fully autonomous weapons systems and in mass domestic surveillance involving US citizens. The company positions these safeguards as part of its broader safety-oriented development framework, which it publishes through regular technical and governance reports.

Pentagon officials argue that such limitations are incompatible with military operational requirements. According to CNN and the BBC, defense officials state that decisions regarding lawful use rest with the government as the end user, not with private suppliers. The Department of Defense argues that reliance on vendor-imposed restrictions could delay or constrain responses in national security scenarios.

Tensions escalated after a meeting between US Secretary of Defense Pete Hegseth and Dario Amodei, Chief Executive Officer, Anthropic. While sources cited by the BBC describe the meeting as professional, the administration’s position hardened days later after Anthropic publicly reaffirmed its refusal to amend its usage policies.

Shortly afterward, President Trump announced via Truth Social that federal agencies would be instructed to end their use of Anthropic products. The administration also directed that Anthropic be classified as a supply chain risk, a designation typically associated with vendors viewed as incompatible with US national security priorities.

The policy shift has extended beyond the Department of Defense. According to Reuters, the US Treasury Department confirms that it is terminating all use of Anthropic products, including the Claude platform. Treasury Secretary Scott Bessent states on X that the decision aligns with the president’s directive.

The Federal Housing Finance Agency (FHFA) has taken similar action. William Pulte, Director, FHFA, confirms that the agency and the mortgage entities it oversees, including Fannie Mae and Freddie Mac, are ending all use of Anthropic technology. These moves indicate that the administration intends to apply the policy consistently across both defense and civilian agencies.

Operationally, the decision introduces complexity for the Pentagon. Anthropic’s systems are integrated into internal workflows, and replacing them requires system migration, validation, and security testing. Pentagon officials acknowledge that alternative AI models exist but do not yet match Claude’s performance in classified environments.

Other AI providers are moving quickly to fill the gap. OpenAI announced a new agreement to deploy its technology within the Department of Defense’s classified networks shortly after the Anthropic decision became public, according to Reuters. While OpenAI leadership has expressed alignment with concerns about military AI governance, the company does not impose comparable contractual restrictions.

For Anthropic, the immediate revenue impact of losing a US$200 million defense contract is limited relative to its estimated valuation of about US$380 billion, as reported by Bloomberg. The more significant risk stems from the supply chain risk designation. Under this classification, companies working with the US military must demonstrate that Anthropic technology is not used in defense-related projects.

Given that Anthropic’s growth strategy relies heavily on large enterprise customers, many of which maintain government contracts or pursue future public-sector work, this designation could restrict its addressable market. Adam Connor, Vice President for Technology Policy, Center for American Progress, says that a substantial portion of Anthropic’s customer base may reassess relationships to avoid regulatory exposure.

The administration previously signaled that it may invoke the Defense Production Act of 1950 to compel cooperation from Anthropic on national security grounds, according to the BBC. The final directive does not clarify whether that authority will be used. Legal experts cited by Bloomberg note that simultaneously compelling cooperation while designating a supplier as a supply chain risk would raise unresolved legal questions.

The dispute unfolds as the White House frames AI leadership as a strategic priority comparable to Cold War-era technological competition. While the administration emphasizes operational control, the decision also highlights the growing tension between ethical AI commitments and national security procurement requirements.

At this stage, neither the Pentagon nor the White House has indicated whether further actions against Anthropic or other AI providers are forthcoming. 
