Trump Directs Federal Agencies to Cease Use of Anthropic’s AI Technology
In a significant turn in U.S. federal technology policy, President Donald Trump announced on Friday the immediate cessation of all federal agencies’ use of Anthropic’s AI technology. The directive extends to the Department of Defense and is positioned as an essential step in re-evaluating the government’s relationship with frontier AI firms. Alongside the directive, Trump described a six-month phase-out period during which agencies that rely on Anthropic’s tools, such as its popular AI platform Claude, would need to transition away from the company’s technologies.
- Background to the Directive
- Presidential Mandate and Consequences
- Defense Secretary’s Position
- The Nature of Discourse
- Amodei’s Defense
- Anthropic’s Establishment and Federal Contracts
- GSA’s Actions
- Public Response and Ongoing Discussions
Background to the Directive
Trump’s declaration followed tense negotiations between Anthropic and the Pentagon. The San Francisco-based firm, renowned for its AI capabilities, had been integrated into both classified and unclassified networks of the department. However, tensions arose over Anthropic’s refusal to grant the Pentagon unrestricted access to its models, access the Defense Department deemed crucial for operational flexibility.
Against this backdrop, Anthropic CEO Dario Amodei offered insight into the firm’s ethical stance. In a Thursday statement, he emphasized that Claude would not be used for mass surveillance of U.S. citizens or to support fully autonomous weapons. Trump framed this cautious approach as an attempt to “strong arm” the Defense Department into abiding by the company’s terms of service.
Presidential Mandate and Consequences
In his post on Truth Social, Trump was emphatic: “We don’t need it, we don’t want it, and will not do business with them again.” The potential repercussions for Anthropic were made clear, as Trump warned of serious civil and criminal consequences if the company did not cooperate with the planned phase-out of its technologies. This stringent stance highlights the administration’s resolve to prioritize national security over partnerships with AI firms.
Defense Secretary’s Position
Following Trump’s announcement, Secretary of Defense Pete Hegseth ordered that Anthropic be designated a “Supply-Chain Risk to National Security.” The declaration raised eyebrows for its contradictory nature: the Pentagon will continue to use the technology for up to six more months despite labeling it a risk. Amodei pointedly noted this inconsistency in his response, underscoring the confusion surrounding the Pentagon’s decision-making.
The Nature of Discourse
The language used by Trump, Hegseth, Pentagon spokesperson Sean Parnell, and Defense Undersecretary Emil Michael was notably aggressive. Hegseth’s remarks included accusations against Anthropic, framing the company’s actions as “cowardly” and devoid of responsibility towards American safety. Michael took it further by labeling Amodei a “liar” and accusing him of attempting to control the U.S. military with his ethical constraints.
Amodei’s Defense
Countering the heavy rhetoric from Pentagon officials, Amodei reiterated Anthropic’s stance. He stated that the limits placed on AI usage—specifically regarding mass surveillance and autonomous weapons—were safeguards designed to protect democratic values. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” Amodei asserted, focusing on the ethical implications of military applications.
Anthropic’s Establishment and Federal Contracts
Since its founding in 2021, Anthropic has gained significant traction within the federal government, leveraging a partnership with Amazon Web Services. This collaboration has allowed the firm to establish a foothold in critical Defense Department initiatives, including substantial contracts aimed at enhancing the military’s capabilities in AI. In July 2025, Anthropic, alongside other leading AI firms, secured contracts worth $200 million to bolster the Pentagon’s AI efforts.
GSA’s Actions
The General Services Administration (GSA) also weighed in, announcing that it would remove Anthropic from its Multiple Award Schedule. The decision signals a broader governmental shift away from the company and extends Trump’s directive across the federal government’s procurement apparatus. GSA Administrator Edward C. Forst emphasized the commitment to national security in his statement, reinforcing a narrative that ties corporate ethics to national interests.
Public Response and Ongoing Discussions
As the situation unfolds, the charged exchange between Anthropic and government officials raises deeper questions about the ethical role of AI in national security. Anthropic has expressed its willingness to collaborate within the bounds of lawful use and has stated that, to date, its technology has posed no risk to active government missions.
The friction between an emerging tech company aiming to prioritize ethical considerations and a government that prioritizes operational readiness illustrates a uniquely modern conflict, reminiscent of larger, historical discussions about the intersections of technology, ethics, and governance.