Trump Orders Federal Agencies to Phase Out Anthropic Technology Amid Pentagon Dispute
In a striking turn of events, President Donald Trump announced on Friday that he had directed all federal agencies to phase out their use of Anthropic technology. The announcement came on the heels of an increasingly public rift between Anthropic, a rising star in the artificial intelligence landscape, and the Pentagon over AI safety and the use of AI in military applications.
Background to the Conflict
Trump’s comments coincided with a critical deadline the Pentagon had set for Anthropic. The Defense Department had demanded that Anthropic allow unrestricted military access to its AI technology or face severe repercussions. Just hours before the deadline, CEO Dario Amodei publicly declared that his company “cannot in good conscience accede” to the Defense Department’s stipulations, thrusting the conflict into the public eye.
The dispute centers on how advanced AI systems, such as Anthropic’s chatbot Claude, can be responsibly used within national security frameworks—especially in high-pressure scenarios that may involve lethal force or the handling of sensitive data. While Anthropic may be able to absorb the loss of this contract, the Pentagon’s ultimatum could have far-reaching consequences for the company’s reputation and its business partnerships.
Pentagon’s Ultimatum and Industry Reactions
The ultimatum from Defense Secretary Pete Hegseth raised alarms about the potential fallout. If Amodei stood firm, military officials warned, they would not only terminate the contract but also designate Anthropic a “supply chain risk.” That classification is typically reserved for entities tied to foreign adversaries and could severely damage Anthropic’s relationships with essential partners across the tech ecosystem.
Conversely, if Amodei chose to comply, it could lead to a significant loss of trust within the broader AI community. Many professionals are drawn to the field for its commitment to responsible innovation, which could be undermined by compromises related to national security and military use.
Anthropic had asked the Pentagon for firm assurances that Claude would not be deployed for mass surveillance of Americans or in autonomous weapon systems lacking human oversight. The discussions, however, quickly escalated from private negotiation into a highly public dispute. Anthropic expressed concern that what appeared to be a “compromise” proposal from the Pentagon was laden with legal language that could allow those safeguards to be ignored.
The Polarization of Tech Perspectives
Amid this controversy, tensions flared across Silicon Valley. Emil Michael, the Defense Undersecretary for research and engineering, openly criticized Amodei, asserting that he sought to compromise national safety for personal control over military AI applications. In contrast, many tech workers from Anthropic’s competitors—such as OpenAI and Google—sided with Amodei, expressing solidarity through an open letter. These workers warned that the Pentagon’s aggressive tactics could divide tech firms based on fear, fostering a toxic environment.
Elon Musk, who has ties to the current administration, appeared to align with Trump, asserting on his social media platform that “Anthropic hates Western Civilization.” The comment referred to passages in Claude’s operational guidelines emphasizing the value of diverse perspectives, and it underscores the ongoing debate over the ethics of AI development and the value systems embedded in AI technology.
In a surprising twist amidst the rivalry, OpenAI CEO Sam Altman publicly expressed trust in Anthropic’s commitment to safety and criticized the Pentagon’s aggressive negotiation tactics. He emphasized that while differences exist between the companies, the principles of safety shared across the industry warrant a more collaborative dialogue rather than divisive ultimatums.
Diverse Perspectives from Military Experts
Former high-ranking military officials have also weighed in. Retired Air Force General Jack Shanahan criticized the Pentagon’s approach while affirming Anthropic’s stance on safety safeguards. Shanahan, who faced backlash for his role in Project Maven during Trump’s earlier administration, noted that painting Anthropic as a high-risk entity generates sensational headlines but could prove detrimental to everyone involved.
He argued that AI technologies like Claude are already being employed within various government sectors, and the cautionary lines drawn by Anthropic were reasonable, especially concerning fully autonomous weapon systems.
The Pentagon’s Course of Action
As the deadline loomed, Pentagon officials underscored the need to ensure that military operations remain invulnerable to commercial constraints. Spokesman Sean Parnell articulated the Pentagon’s position, asserting that no company should dictate the framework of military decision-making, and indicated that the military would accept Anthropic’s technology only free of conditions that might jeopardize critical operations.
Hegseth’s interactions with Amodei reportedly emphasized tangible consequences for noncompliance, indicating that the Pentagon could invoke the Defense Production Act—a Cold War–era law—to compel the use of Anthropic’s technology in military applications, even without the company’s consent.
Amodei responded to these threats by reaffirming his optimism about Claude’s utility for government work, but reiterated that if the compromises were non-negotiable, Anthropic was prepared to let the Pentagon transition to alternate providers.