San Francisco, CA – A dispute between the U.S. Department of Defense and artificial intelligence firm Anthropic has escalated, culminating in an order from President Donald Trump directing federal agencies to cease using Anthropic’s technology. The conflict centers on the Pentagon’s desire for unfettered access to Anthropic’s AI models for military applications, a request the company is resisting due to concerns about the ethical implications of deploying AI in warfare and surveillance. The standoff highlights the growing tension between the rapid advancement of artificial intelligence and the need for responsible development and deployment, particularly within the defense sector.
The core of the disagreement lies in Anthropic’s reluctance to relinquish control over the safety features embedded within its AI software. The Pentagon aims to utilize AI models across all levels of classification to accelerate military decision-making, including within highly secretive networks. However, Anthropic insists its AI should not be used for mass surveillance within the United States or in fully autonomous weapons systems. This stance has drawn sharp criticism from Trump, who accused the company of being “left-radical” and “woke,” and vowed to sever ties with Anthropic altogether.
On his social media platform, Truth Social, Trump stated, “The United States of America will never allow a left-radical, woke company to dictate how our great military fights and wins wars.” He further declared, “We don’t need them, we don’t want them, and we will not be doing business with them anymore.” Agencies currently using Anthropic’s products, including the Department of Defense, have been given a six-month transition period to find alternative solutions. Trump also threatened Anthropic with the “full power of the office of the President” and warned of “severe civil and criminal consequences” should the company not become “cooperative.”
Pentagon’s Push for Unrestricted AI Access
The Department of Defense’s push for broader access to AI technology is driven by a desire to maintain a competitive edge in a rapidly evolving geopolitical landscape. According to a report by Tagesschau, the Pentagon issued an ultimatum to Anthropic, demanding full access to its AI technology by February 26, 2026, at 5:01 PM. Failure to comply could result in the cancellation of a potential deal worth up to $200 million, initially awarded in the summer of 2025 as part of a larger effort to integrate AI into military operations.
Defense Secretary Pete Hegseth has indicated the Pentagon’s willingness to invoke the Defense Production Act (DPA), a Cold War-era law that grants the government significant control over companies and their products in the name of national security. The DPA was previously invoked during the COVID-19 pandemic to address supply chain issues in the medical sector. Applying the DPA to Anthropic would compel the company to release its products for “all legitimate purposes” of the military, effectively overriding its stated ethical concerns. Hegseth also threatened to classify Anthropic as a supply chain risk, a designation typically reserved for companies linked to adversarial nations such as Russia or China.
Anthropic’s Firm Stance on Ethical Boundaries
Anthropic CEO Dario Amodei has publicly stated the company’s unwavering commitment to its ethical principles. As reported by Zeit Online, Amodei explained that the Defense Department is seeking to contract only with AI companies that agree to “any lawful use” and remove existing safeguards. The Pentagon reportedly threatened to remove Anthropic from its systems and label the firm as a “supply chain risk” if it refused to comply.
“We cannot in good conscience meet the demand,” Amodei stated, emphasizing that while the Department of Defense is free to choose partners aligned with its vision, Anthropic remains steadfast in its commitment to responsible AI development. He expressed hope that the Pentagon would reconsider its position, given the significant value Anthropic’s technology could bring to the armed forces. This stance reflects a broader debate within the AI community over the ethical responsibilities of developers when their technology is put to military use.
The Broader Implications of the Conflict
This dispute between Anthropic and the U.S. Department of Defense is not an isolated incident. It represents a growing tension between the desire for technological advancement and the need for ethical considerations in the development and deployment of artificial intelligence. The Pentagon’s aggressive pursuit of unrestricted AI access raises concerns about the potential for autonomous weapons systems and the erosion of human oversight in critical decision-making processes.
The situation also highlights the increasing importance of AI as a strategic asset in modern warfare. Nations are investing heavily in AI research and development, recognizing its potential to revolutionize military capabilities. However, the absence of clear international regulations and ethical guidelines governing the use of AI in warfare creates a dangerous vacuum, increasing the risk of unintended consequences and escalating conflicts. A recent study cited by Tagesschau also found that artificial intelligence has become one of the biggest business risks for companies.
The Defense Production Act and its Potential Use
The potential invocation of the Defense Production Act (DPA) by the Pentagon is a significant escalation in this conflict. Originally enacted during the Korean War in 1950, the DPA allows the U.S. Government to prioritize contracts and compel private companies to produce goods deemed essential for national defense. While historically used to address supply chain shortages, its application to a technology company like Anthropic would be unprecedented and could set a dangerous precedent for government control over the AI industry. The DPA’s broad authority raises concerns about potential infringements on corporate autonomy and the stifling of innovation.
Anthropic’s Position in the AI Landscape
Anthropic, founded by former OpenAI researchers, has quickly established itself as a leading player in the AI space. The company is known for its focus on building safe and reliable AI systems, prioritizing transparency and interpretability. Its Claude family of AI models is designed to be less prone to generating harmful or biased outputs than some other large language models. This commitment to safety is precisely what has brought Anthropic into conflict with the Pentagon, which seeks to remove these safeguards to maximize the technology’s utility for military purposes.
What Happens Next?
The immediate future of this conflict remains uncertain. While Donald Trump’s order to federal agencies to cease using Anthropic’s technology is in effect, the long-term implications are still unfolding. The Pentagon could proceed with invoking the Defense Production Act, potentially forcing Anthropic to comply with its demands. Alternatively, the two parties could attempt to negotiate a compromise that addresses the Pentagon’s security concerns while preserving Anthropic’s ethical principles. The outcome of this dispute will likely have a significant impact on the future of AI development and its role in national security.
As of February 28, 2026, the situation remains at a standstill, with no indication of imminent negotiations. The next key development will likely be the Pentagon’s decision on whether to formally invoke the Defense Production Act. Readers can stay updated on this evolving story through official statements from the Department of Defense and Anthropic, as well as reporting from reputable news organizations.
Do you feel the U.S. Government should have the power to compel AI companies to prioritize military applications? Share your thoughts in the comments below.