In the fast-moving world of artificial intelligence, companies typically measure progress in model improvements, commercial deals and research breakthroughs. For Anthropic, the San Francisco-based AI firm behind the language model Claude, the focus has long been on building advanced systems with safety guardrails and ethical limits.
But in recent days, that familiar arc of technical innovation shifted into unfamiliar terrain: geopolitics, national defense and very public pressure. U.S. Defense Secretary Pete Hegseth has delivered a clear ultimatum to Anthropic’s leadership: agree to let the United States Department of Defense use Claude broadly without the company’s current restrictions by Friday, February 27, 2026, or face consequences that could imperil its role in U.S. defense work.
The deadline, set during a meeting between Hegseth and Anthropic CEO Dario Amodei, puts the startup in a rare spotlight. Defense officials are pressing for “unrestricted” access to Claude for all military applications they consider lawful — a stance that, by definition, would remove the safety limitations Anthropic currently places on its technology. Sources familiar with the talks say the Pentagon warned that refusal could lead to loss of government contracts, a designation as a “supply chain risk,” or even the use of emergency powers under the Defense Production Act to compel compliance.
The clash underscores how rapidly the context around artificial intelligence has broadened. A technology born in research labs and private companies is now central to national security conversations. Claude, which had already been integrated into U.S. classified systems and used by military partners, sits at the intersection of capability and controversy.
Behind the briefings and public statements is a deeper tension: what limits, if any, a private company can set on how its tools are used, and whether those limits can withstand government demands in times of perceived strategic competition. For years, Anthropic and its peers have laid out policies about where and how their models may be applied. This negotiation with the Pentagon, however, brings those policies into direct contact with the machinery of state power.
The deadline adds urgency. Startups are accustomed to rapid decision-making, but negotiating with a major government institution against a fixed deadline is a different kind of pressure. The interaction is unfolding not just as a business matter but as a broader story about how society balances innovation, ethics and national security in an era when advanced algorithms have real influence far beyond corporate offices.
As the week advances toward Friday's cutoff, the negotiation will likely reverberate beyond Anthropic itself. Whether the company accedes to the Pentagon's terms, stands firm or seeks a compromise, the outcome will shape expectations for how private AI labs engage with government demands in the years ahead.