The Pentagon Just Made Anthropic a 'Supply Chain Risk.' It Has Never Done That to a U.S. Company.

Anthropic refused to let Claude be used in autonomous weapons or mass surveillance. The Pentagon blacklisted it, OpenAI took the $200M contract, and the CFO says billions in 2026 revenue are on the line.

On February 27, 2026, Sam Altman announced that OpenAI had signed a $200 million contract with the Department of Defense. The announcement came hours after President Trump ordered the U.S. government to stop using Anthropic’s products and Defense Secretary Pete Hegseth moved to designate Anthropic a national security risk. The timing was not a coincidence.

Anthropic had held the same $200 million Pentagon contract since July 2025. It lost it for one reason: the company refused to let Claude be used in fully autonomous weapons or domestic mass surveillance. OpenAI, which had quietly removed its military use ban in January 2024, told the Pentagon its models were available “for all lawful purposes.” The DOD preferred that answer.

Ten days later, Anthropic’s CFO Krishna Rao told a federal court that the blacklisting could cost the company “multiple billions of dollars” in 2026 revenue. That is not a complaint from a struggling startup. Anthropic just crossed $30 billion in annualized revenue, more than OpenAI’s $25 billion. The pressure here is not about survival. It is about whether an AI lab can say no to the U.S. government and live to invoice the quarter.

The Numbers

Anthropic hit $30 billion in annualized revenue in the first quarter of 2026, up from roughly $4 billion a year earlier, according to figures reported by Bloomberg and Reuters. The company is in the middle of a funding round that values it at around $350 billion. Claude is the leading product on the enterprise side of the AI market, with over 300,000 business customers.

The Pentagon deal was worth up to $200 million over two years. That is less than one percent of Anthropic’s annualized run rate. The structural problem is not the contract itself. It is the downstream effect on everything attached to federal procurement.
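For the back-of-envelope version, here is that share worked out in a quick sketch. The $200 million ceiling and $30 billion run rate come from the figures above; splitting the contract evenly across its two years is my simplifying assumption.

```python
# Back-of-envelope: the Pentagon contract as a share of Anthropic's run rate.
# Dollar figures are from the reporting above; the even two-year split is an assumption.

contract_total = 200_000_000        # up to $200M over two years
contract_per_year = contract_total / 2
annual_run_rate = 30_000_000_000    # ~$30B annualized revenue

share = contract_per_year / annual_run_rate
print(f"Contract share of run rate: {share:.2%}")  # ~0.33%, well under one percent
```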

In its March 9 complaint, Anthropic told the court that the supply chain risk designation had already caused federal contractors to pause or suspend work, including removing Claude from existing deployments. Private sector partners backed away from deals “amid uncertainty,” Rao wrote. The near-term exposure: hundreds of millions of dollars. The 2026 full-year exposure if the label stands: multiple billions.

The ripple math matters because of how Anthropic’s business works. Claude runs on AWS Trainium chips and Nvidia GPUs that together cost Anthropic roughly $1 billion a month. The company needs to keep raising capital to keep buying compute. Spooked enterprise customers drive away investors. Investors who pull back leave Anthropic short of the compute it needs to stay in the model-quality race against OpenAI, Google, and xAI. That is the real pressure transmission line.

The $200 million that went to OpenAI is almost irrelevant to OpenAI. It is crushing to Anthropic not because of the dollars, but because of what it signals to every other customer who has to think about whether their AI vendor is on a U.S. government blacklist.

Pressure Points

The Designation Has Never Been Used This Way

The Pentagon’s supply chain risk designation was created to block foreign adversaries: Chinese surveillance equipment makers, Russian cybersecurity firms, companies linked to Iran’s IRGC. It had never been applied to an American company. Anthropic is the first.

That legal novelty is doing work in two directions. In San Francisco federal court, Judge Rita Lin granted Anthropic a preliminary injunction on March 26, ruling that the administration’s actions looked like “First Amendment retaliation” and that Anthropic was “likely to succeed” on the merits. Seventeen federal agencies named as defendants were told to stop enforcing the ban. But on April 8, the D.C. Circuit Court of Appeals denied Anthropic’s request to pause the DOD’s separate designation. The result: Anthropic is excluded from Pentagon contracts while it is allowed to keep working with other federal agencies. The legal system has not yet agreed with itself.

Every week the designation stays in place, more enterprise customers with federal exposure have to run a compliance check on Claude. Some of them will not wait for the Supreme Court to sort it out. They will switch.

The Guardrail Is the Product

Dario Amodei has spent three years telling investors and researchers that Anthropic’s edge is safety-first alignment. The company’s usage policy bans Claude from being used in autonomous weapons, mass surveillance of U.S. persons, and the generation of biological, chemical, or nuclear weapons information. Enterprise customers pay a premium for that positioning. Regulators in the EU and UK treat Anthropic more gently than its competitors for the same reason.

If Anthropic folds on the Pentagon’s “all lawful purposes” demand, the safety brand collapses. Every EU customer that bought Claude partly because of the AI Act compliance story has to re-evaluate. Every researcher at the company who took a pay cut to work on Constitutional AI has to ask what the point was. The brand is the moat, and the moat gets filled in the moment Anthropic agrees to fully autonomous weapon use.

If Anthropic does not fold, the U.S. government keeps treating it as a pariah, and OpenAI, xAI, and whatever the next DOD-friendly model is keep eating the federal market. Anthropic has written itself into a box where both exits are expensive.

OpenAI Now Has a Moat Anthropic Cannot Cross

OpenAI spent 2025 building a defense business. It had removed its military use ban in January 2024, and through 2025 it hired more than a dozen former Pentagon and intelligence officials and opened OpenAI for Government, a dedicated federal sales arm. Altman personally attended defense contractor summits. By February 2026, when the Anthropic slot opened, OpenAI had the relationships, the cleared personnel, and the willingness to serve “all lawful purposes” ready to go.

The federal AI market is conservatively projected at $50 billion per year by 2030, according to Gartner. Once a vendor is embedded in classified workflows, the switching cost for the government becomes enormous. Every quarter OpenAI sits inside the Pentagon unopposed is a quarter where Anthropic’s path back narrows. It is the same dynamic that made Oracle the default database vendor for federal agencies for four decades. First mover in national security compute wins for a long time.

What Happens Next

The most likely path: the D.C. Circuit takes up the full appeal during the summer, and a final ruling on the supply chain risk designation arrives before the end of 2026. In the meantime, Anthropic loses the Pentagon market and watches 5 to 15 percent of federal-adjacent enterprise revenue peel off. The company still hits roughly $35 billion to $40 billion in annualized revenue by year end because commercial demand is strong enough to offset the hit. The $350 billion valuation survives, bruised.

The bull case for Anthropic: the Supreme Court or the D.C. Circuit sides with the San Francisco ruling. The supply chain risk designation gets vacated as First Amendment retaliation. The Pentagon is forced to reinstate Anthropic as an eligible vendor, and enterprise customers who had put Claude pilots on hold restart them. The ban backfires and becomes a case study in why singling out one American AI company for refusing weapons work is legally radioactive. Anthropic uses the episode to raise at a $500 billion valuation by Q4.

The bear case: the D.C. Circuit upholds the designation, the San Francisco injunction gets narrowed, and large enterprise customers with federal contracting arms (Lockheed, Palantir partners, the Big Four consultancies) quietly stop renewing Claude. A Fortune 500 AWS customer cites the supply chain risk label as a reason to switch its AI workloads to OpenAI or Gemini, and the story makes the Wall Street Journal. Anthropic’s Q3 growth rate halves from the current pace. The valuation round gets restructured. The safety-first brand stays intact, but the revenue engine behind it stalls, and layoffs follow.

What To Watch

Five specific signals will tell you which path this is on.

First, the D.C. Circuit merits hearing, expected in May or June. If the appeals judges press the DOD on the same “that seems a pretty low bar” questions the San Francisco court asked, Anthropic has a real path back. If the appeals panel defers to Pentagon discretion, the designation likely stays.

Second, Anthropic’s Q2 enterprise net revenue retention. The company does not publish this number, but it leaks through funding round decks. Current NRR is north of 160 percent. If it drops below 140 percent for Q2, federal contractor customers are churning faster than new logos can replace them (a worked sketch of the NRR math follows this list).

Third, the next big federal AI contract award. If the Department of Energy, the Department of Homeland Security, or a major intelligence agency picks OpenAI or xAI over Anthropic in the next 90 days, the federal market is locking in without Anthropic.

Fourth, any move by Amodei to personally meet with Trump or Hegseth. He has so far refused to negotiate on the autonomous weapons guardrail. If that stance softens publicly, the safety brand starts to crack. If it holds, the legal fight is the only remaining lever.

Fifth, watch for EU reaction. The European Commission has been slow-walking AI Act enforcement against Anthropic partly because of the company’s safety record. If Brussels cites the U.S. legal fight as evidence that American AI labs cannot be trusted to hold the line on autonomous weapons, Anthropic loses a piece of its European premium too.
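On the NRR point above: net revenue retention is a standard SaaS metric, but Anthropic publishes none of its inputs, so the sketch below uses invented cohort figures purely to show how federal-contractor churn would drag the number from the reported 160-plus percent toward the 140 percent warning line.

```python
# Illustrative net revenue retention (NRR) for a single customer cohort.
# Standard formula: (starting ARR + expansion - contraction - churn) / starting ARR.
# All dollar figures below are invented for illustration only.

starting_arr = 100.0   # ARR from existing customers at the start of the period
expansion    = 70.0    # usage growth and upsells within the cohort
contraction  = 5.0     # downgrades
churned      = 4.0     # revenue from customers who left entirely

nrr = (starting_arr + expansion - contraction - churned) / starting_arr
print(f"NRR: {nrr:.0%}")  # 161%, roughly the "north of 160 percent" cited above

# If federal-exposed customers bail and churn jumps, the metric falls fast:
nrr_stressed = (starting_arr + expansion - contraction - 25.0) / starting_arr
print(f"Stressed NRR: {nrr_stressed:.0%}")  # 140%, the warning threshold in the text
```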

My Opinion

Anthropic did the right thing, and the right thing is going to be expensive. The company built a business on a specific promise: its models would not be used for fully autonomous killing or mass surveillance of civilians. When the DOD demanded Anthropic drop that promise, the company refused. The Pentagon then invoked a national security tool designed for foreign adversaries against an American company that had done nothing except decline a contract on ethical grounds. That is not a supply chain risk. That is a punishment.

The market is mispricing what this episode means for the AI industry. Right now investors treat the Anthropic-Pentagon fight as a sideshow, a niche regulatory hiccup in an otherwise thriving business. That is wrong. What is actually being tested is whether an AI lab can have an independent policy on weapons use, or whether national security law trumps the First Amendment for any vendor the government decides it wants on tap. If the D.C. Circuit lets this designation stand, every AI lab in the country learns that safety guardrails are a liability. The next Anthropic will not refuse the Pentagon. It will not write the guardrail in the first place.

There is a version of this that ends with Anthropic winning the case, raising at $500 billion, and becoming the textbook example of why safety-first positioning is a durable moat. There is another version where Anthropic wins the legal fight but loses the customer fight, because by the time the courts sort it out, OpenAI has absorbed the federal market and Constitutional AI becomes a graduate school case study. I think the second version is more likely than the current valuation round suggests. Refusing the Pentagon was principled. It was also, in the short run, a mistake you cannot fully take back.


Read more forecasts and analysis at humai.blog. Subscribe to stay ahead of the biggest trends in AI and tech.

