The U.S. government has formally designated Anthropic, a leading artificial intelligence firm, an "unacceptable" national security risk. The decision stems from concerns that the company could manipulate its AI technology, including its popular Claude chatbot, to prioritize its own interests over U.S. strategic objectives, particularly in a conflict scenario.

Government Concerns Over AI Manipulation

In a 40-page court filing submitted to the U.S. District Court for the Northern District of California, government lawyers argued that AI systems are "acutely vulnerable to manipulation." Granting Anthropic access to Department of Defense (DoD) infrastructure, they contend, would introduce unacceptable vulnerabilities into military supply chains. The filing further argued that Anthropic's control over its own technology creates a risk that the company could disable or alter systems in ways detrimental to U.S. warfighting capabilities.

Anthropic’s Response and Ongoing Legal Battle

Anthropic has publicly countered these claims, citing statements from CEO Dario Amodei, who emphasized that military decisions on AI usage rest with the armed forces, not his company. Amodei stated that Anthropic has never objected to or limited military operations involving its technology.

The government's stance has already triggered legal action. On March 9, Anthropic filed two lawsuits challenging Defense Secretary Pete Hegseth's recent designation of the company as a "supply chain risk": one in the California district court and another in the U.S. Court of Appeals for the District of Columbia Circuit.

Why This Matters: The Broader Context

This dispute underscores a growing tension between the rapid development of AI and national security considerations. Governments worldwide are increasingly scrutinizing AI companies, particularly those with access to sensitive military or government systems. The case highlights the inherent risks of relying on privately controlled AI infrastructure in critical defense applications.

The U.S. government's move signals a broader trend toward tighter oversight of AI supply chains and may presage more stringent regulation of AI firms operating in national security domains. The outcome of Anthropic's legal challenges will likely set a precedent for how governments manage AI risks in the future.

The dispute also carries an added political dimension: the government's filings refer to the "Department of War" rather than the "Department of Defense," reflecting the Trump administration's preferred naming.

Ultimately, the U.S. government’s assessment of Anthropic as a national security risk represents a critical juncture in the evolving relationship between AI technology and defense strategies.