U.S. President Donald Trump has ordered all federal agencies to sever ties with Anthropic, a decision poised to reshape the artificial intelligence landscape. The move follows the company’s refusal to allow its AI model, Claude, to be used for military purposes and “lethal” operations. Labeling the company’s stance as “woke,” Trump has initiated an official boycott across all government institutions.
What is Anthropic: A safety-first approach to AI
Founded in 2021 by former OpenAI executives, San Francisco-based Anthropic is widely recognized for its “safety-centric” and ethical approach to AI development. Backed by billions in investment from tech giants like Google and Amazon, the company trains its flagship model, Claude, using a method called “Constitutional AI.” This framework ensures that the AI adheres to specific ethical principles and avoids engaging in activities that violate human rights. Anthropic maintains a strict “red line” against its technology being utilized in autonomous weapons systems or direct lethal military operations.
The standoff with the Department of Defense
The crisis was triggered when Anthropic CEO Dario Amodei opposed the integration of AI systems into autonomous weaponry and large-scale domestic surveillance programs. The company argued that such applications carry significant technological risks and demanded ethical safeguards and security guarantees from the Pentagon.
In response, Secretary of Defense Pete Hegseth asserted that the American military must be able to utilize AI without restrictions to maintain a competitive edge. The Pentagon issued an ultimatum, giving Anthropic until Friday at 5:01 PM to accept these terms. Anthropic ultimately refused to compromise on its ethical stance, rejecting the ultimatum.
Trump’s statement: “Woke” label and a six-month deadline
Following Anthropic’s refusal to back down, President Donald Trump released a statement on social media characterizing the company as “woke.” Viewing the company’s reluctance to participate in military-grade AI applications as a hindrance, the administration ordered all federal agencies to immediately cease using Anthropic’s technology.
Agencies currently utilizing the technology have been granted a six-month transition period to purge their systems and migrate to alternative platforms. Trump warned that failure to cooperate during this divestment process could result in severe civil and criminal penalties for the institutions and executives involved.
A deepening divide in tech and defense
Pentagon officials are now discussing whether to designate Anthropic’s stance a “supply chain risk.” Such a classification could jeopardize the company’s partnerships not only with the government but also with major private-sector defense stakeholders. While other industry players such as Google, Meta, and xAI have shown more flexibility in working with the Department of Defense, Anthropic’s singular stance has reignited the debate between ethical boundaries and military supremacy.
Conclusion and future outlook
This official boycott effectively terminates Anthropic’s $200 million government contract signed last year. As federal agencies prepare to remove Claude models from their infrastructure within the next six months, the ethical debate over AI’s role on the battlefield is expected to intensify under the Trump administration.
