Something happened this week that I can’t recall ever seeing in the tech industry. An American artificial intelligence company, valued at $380 billion, has said no to the U.S. Department of Defense. The Pentagon. To the most powerful government in the world.
And the government has responded by declaring it a threat to national security.
It sounds like a movie. But it isn’t. That’s what has happened between Anthropic, the company behind Claude, and the Trump administration in the final days of February 2026. And the consequences are proving unexpected.
Let’s break it all down because I think it’s one of those stories that, years from now, we’ll look back on as a turning point.
Anthropic and the Pentagon: how it all began
To understand the story, you have to know that Anthropic is not just any company. It was founded by Dario and Daniela Amodei, former OpenAI executives who left precisely because they wanted to build safer AI. That is literally the company’s raison d’être.
Claude, its artificial intelligence model, was until a few days ago the only advanced AI model operating within the Pentagon’s classified networks. Yes, you read that right. A contract worth up to $200 million. Claude was integrated into real military operations through partners like Palantir, helping to process and analyze massive amounts of data in real time.
Everything was going more or less fine until the Pentagon wanted more.
The ultimatum
In mid-February, negotiations between Anthropic and the Department of Defense became tense. The Pentagon wanted Claude to be available for “any lawful use” without restrictions. Anthropic insisted on two very specific red lines:
- No mass surveillance of American citizens.
- No lethal autonomous weapons without human oversight.
On Feb. 24, Defense Secretary Pete Hegseth gave Dario Amodei an ultimatum: you have until 5:01 p.m. on Friday the 27th to give in. Or you lose the contract.
Anthropic's response
On February 26, Amodei posted a statement on Anthropic’s website that is worth reading in its entirety. The summarized version is this: “We cannot in good conscience agree to what they are asking us to do.”
Their arguments were twofold. First, domestic mass surveillance is incompatible with democratic values, no matter who carries it out. Second, and this seems particularly relevant to me, current AI models are simply not reliable enough to control autonomous weapons. This is not an abstract philosophical question: the technology is not ready, and putting it in charge of deciding who lives and who dies puts both soldiers and civilians at risk.
Anthropic made it clear that it had no problem with Claude’s general use by the military. In fact, it accepted Claude’s use in missile defense and in operations such as the capture of Nicolás Maduro, in which Claude had participated. The objection was specific: mass surveillance and fully autonomous weapons.
Retaliation: when the government declares you an enemy
On February 27, a lot of things happened very quickly.
Trump ordered all federal agencies to immediately cease use of any Anthropic products. Hegseth designated the company as a “national security supply chain risk,” a label that had until then been used only for foreign adversaries, such as Chinese companies. Never with an American company. And, in addition, he required any military contractor to certify that it did not use Claude in its workflows, with a six-month transition period.
The $200 million contract: gone.
Legal and foreign policy experts reacted immediately. The Council on Foreign Relations published an analysis, and several legal scholars called the decision “probably illegal.” Anthropic announced that it would challenge the designation in court.
The Nuclear Missile Incident (yes, really)
In the midst of all this, the Washington Post revealed a detail that sounds like something out of a TV series. In a meeting, the Pentagon’s chief technology officer posed a hypothetical scenario to Amodei: if an intercontinental ballistic missile is launched against the United States, can the military use Claude to help shoot it down?
According to the Pentagon, Amodei replied something along the lines of “call us, and we’ll sort it out.” According to Anthropic, that is “flatly untrue,” and the company has always accepted Claude’s use in missile defense. Conflicting versions, but the fact that we’re talking about nuclear strike scenarios to negotiate an AI contract says a lot about where we’re at.
OpenAI enters the scene: perfect timing
Hours after Trump vetoed Anthropic, OpenAI announced it had closed a deal with the Pentagon to provide its models on classified networks. Talk about timing.
Sam Altman published a post stating that the agreement included safeguards similar to those Anthropic had demanded, among them: no mass surveillance, no autonomous weapons, and human responsibility in the use of force.
However, several analysts have pointed out an important nuance. OpenAI says its safeguards prohibit the “unrestricted” collection of private information about Americans, but Anthropic went further: it worried that AI could supercharge the collection of public data (social media posts, geolocation), which is technically legal but, in practice, amounts to mass surveillance.
This distinction between private and public data is key and will likely be a major regulatory debate in the coming years. OpenAI also limits deployment to the cloud, which, according to them, physically prevents their models from being integrated into weapons systems. It is an interesting technical argument, although experts have already pointed out that it does not close all the loopholes.
The reaction no one expected: Claude at number 1
And now comes the really surprising part.
Claude, Anthropic’s app, rose to number one in downloads on the US App Store on Saturday, February 28. It surpassed ChatGPT. An app that wasn’t even in the top 100 until the end of January.
The leaked numbers are impressive. Daily registrations broke absolute records every day in the last week of February. Free users grew by more than 60% since January. Paid subscribers have more than doubled since October. And Anthropic’s annualized revenue reached $14 billion, ahead of the most recent figure reported by OpenAI.
A spontaneous movement began on social media, with people leaving ChatGPT and switching to Claude in protest against OpenAI’s agreement with the Pentagon.
The letter from Google and OpenAI employees
Another unexpected twist. More than 430 employees from Google and OpenAI, yes, from competing companies, signed an open letter supporting Anthropic’s position.
The letter calls on Google and OpenAI leaders to “put aside their differences and stand firm” in rejecting the Pentagon’s demands. Roughly 300 signatories came from Google, more than 60 from OpenAI, and the rest from smaller companies in the sector. They specifically asked that no AI company allow its models to be used for domestic mass surveillance or to “kill people autonomously without human oversight.”
Having employees of your competitor publicly defend you against their own country’s government… that’s not something you see every day.
What it all means (to you and me)
I believe this story matters much more than it seems at first glance. And not just if you’re interested in technology.
First, we are at a point where AI companies are making decisions that previously only governments made. Who can be monitored? What weapons can operate autonomously? Where is the line between national security and civil rights? These questions are no longer debated only in parliaments. They are debated in the meeting rooms of San Francisco startups.
Second, the market has spoken. And it has said that it prefers a company with principles to one that gives in to pressure. Claude’s numbers after the veto are proof of this. This could change the incentives for the entire industry: if saying no to power wins you customers, more companies will say no.
And third, which for me is the most important thing: AI is not ready to make life-and-death decisions. I’m not saying this. The company that makes the AI is saying it. When the creator of the tool tells you, “My tool is not reliable for this yet,” maybe we should listen.
The $110 billion round that OpenAI closed the same week, with Amazon, Nvidia and SoftBank, reminds us that there are obscene amounts of money flowing into artificial intelligence. OpenAI’s valuation is already at $730 billion pre-money ($840 billion post-money); Anthropic’s at $380 billion. These numbers are so big they are hard to process.
With that level of money at stake, the pressure to make ethical concessions will be enormous. What Anthropic has done this week, giving up a $200 million government contract and all federal business to uphold two red lines, is, to say the least, remarkable.
Whether it is sustainable in the long term… that’s another question.
Have a good week!
——————-
Article sources:
- Anthropic – “Statement: Department of War” (26 Feb 2026)
- CNN – “Anthropic rejects latest Pentagon offer” (26 Feb 2026)
- CNBC – “Anthropic faces lose-lose scenario in Pentagon conflict” (27 Feb 2026)
- TechCrunch – “OpenAI’s Sam Altman announces Pentagon deal” (28 Feb 2026)
- OpenAI – “Our agreement with the Department of War” (28 Feb 2026)
- Axios – “Claude beats ChatGPT in U.S. app downloads” (1 Mar 2026)
- TechCrunch – “Employees at Google and OpenAI support Anthropic’s stance” (27 Feb 2026)
- NPR – “OpenAI announces Pentagon deal after Trump bans Anthropic” (27 Feb 2026)
- TechCrunch – “OpenAI raises $110B” (27 Feb 2026)
- Council on Foreign Relations – “Anthropic and Pentagon Clash” (Feb 2026)
- Fortune – “OpenAI sweeps in to snag Pentagon contract” (28 Feb 2026)