US halts use of Anthropic AI

Row over military safeguards drives federal ban

US President Donald Trump has ordered federal agencies to stop using artificial intelligence tools developed by Anthropic, marking a sharp escalation in tensions between the administration and the fast-growing AI sector.

The directive requires departments to begin phasing out Anthropic’s systems, including those already embedded in administrative and defence operations. According to officials familiar with the decision, the move follows weeks of disagreement over limits placed on the company’s flagship AI model, Claude, particularly in military contexts.

At the heart of the conflict are safeguards built into Anthropic’s technology. The company has imposed restrictions designed to prevent applications such as mass domestic surveillance and fully autonomous weapons. Representatives from the Department of Defense have argued that those constraints reduce operational flexibility and complicate legitimate national security planning.

Trump described the decision as necessary to protect executive authority and ensure that government agencies are not constrained by private-sector policies. The administration is reportedly reviewing existing contracts and examining whether additional regulatory steps could follow.

Anthropic’s chief executive, Dario Amodei, defended the company’s approach, stating that its safety guardrails are central to responsible AI deployment. He warned that removing such protections could lead to unintended and potentially dangerous outcomes. The firm has indicated it may pursue legal avenues if further punitive measures are imposed.

The dispute has drawn significant attention across Silicon Valley, where AI companies are increasingly partnering with government agencies and weighing how far to go in accommodating national security demands.
