Amazon Web Services Begins Offering OpenAI Models
- Phase 1: Microsoft’s wall comes down
- Phase 2: AWS pounces within 24 hours
- Phase 3: The bedrock of the new deal — agents and infrastructure
- Phase 4: The money math behind the partnership
- Phase 5: Microsoft’s new posture — from moat to market
- Phase 6: Amazon’s angle — from model neutrality to model marketplace
- Phase 7: OpenAI in the middle — power player or pressured tenant?
- What changes for everyone else
- The new normal: AI without a single gatekeeper
Human coverage portrays AWS's new OpenAI offerings as a watershed moment: the end of Microsoft's exclusive reselling rights and a rebalancing of power among the major cloud providers. It highlights OpenAI's missed revenue and user targets alongside enormous infrastructure commitments, framing the AWS deal as both a competitive realignment and a response to mounting financial and strategic pressure. Amazon's cloud just cracked open the AI cartel. With Microsoft's grip on OpenAI distribution finally loosened, Amazon Web Services moved in almost immediately, turning a quiet contract tweak into a full-on reshuffle of the AI power map.
Phase 1: Microsoft’s wall comes down
For three years, if you wanted OpenAI’s best models in the cloud, there was really only one front door: Microsoft Azure. That was by design. Microsoft had an exclusive right to resell OpenAI’s models and products, locking rivals like Amazon and Google out of directly offering the most powerful GPT systems.
That era ended this week. On Monday, Microsoft and OpenAI amended their blockbuster partnership, dismantling the exclusivity that had defined "the first phase of the AI boom." Microsoft's license to OpenAI's IP became non-exclusive, though it still runs through 2032, and the company keeps its roughly 20% stake in OpenAI.1
The immediate effect: OpenAI’s crown‑jewel models were suddenly up for grabs across the wider cloud market.
Phase 2: AWS pounces within 24 hours
Amazon didn’t wait.
On Tuesday—just one day after Microsoft agreed to end its exclusive reselling arrangement—Amazon Web Services announced it would begin selling OpenAI’s models to AWS customers.1 Some of OpenAI’s latest models would be available to preview on AWS starting that day, with “the most powerful GPT models arriving within weeks.”1
The speed was not an accident. This move “completes a restructuring” of one of the AI industry’s most consequential deal webs, one that quietly started months earlier when Amazon committed up to $50 billion as part of OpenAI’s $110 billion funding round.1 That round valued OpenAI at a stunning $852 billion and marked Amazon’s largest‑ever investment in any company.1
Internally, AWS framed the move as overdue rather than opportunistic. “It’s something that our customers have asked for, for a really long time,” AWS chief executive Matt Garman told Bloomberg Television.1
In other words: this wasn’t just Amazon muscling into Microsoft’s turf—it was Amazon claiming it was finally doing what its users had been demanding.
Phase 3: The bedrock of the new deal — agents and infrastructure
Underneath the headline about “OpenAI on AWS” sits a more technical but arguably more important plank: the infrastructure and agent layer.
As part of their newly tight relationship, Amazon and OpenAI have “jointly built a Stateful Runtime Environment for agentic AI on Bedrock,” AWS’s managed foundation‑model platform.1 The pitch: instead of just renting raw GPT models, enterprises can deploy long‑running AI agents that remember context, maintain state across interactions, and plug into data and workflows—all on Amazon’s cloud.
The move dovetails with a parallel announcement that “Amazon is already offering new OpenAI products on AWS,” positioning these tools not as experiments but as production‑ready services inside the world’s biggest public cloud.2
In practice, AWS is turning what used to be Microsoft’s key differentiator—direct, preferred access to OpenAI—into just another tile in its sprawling Bedrock catalog, alongside its own models and those from Anthropic, Meta, and others.
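From a developer's perspective, "just another tile in the Bedrock catalog" means OpenAI models would presumably be invoked through the same Bedrock runtime interface as every other model family. A minimal sketch of what that could look like via Bedrock's Converse API; note that the `openai.gpt-5` model identifier is purely hypothetical, since AWS has not published model IDs for the OpenAI offerings described here:

```python
# Sketch: calling a model through Amazon Bedrock's Converse API.
# The model ID is hypothetical -- actual identifiers for OpenAI models
# on Bedrock were not specified in the coverage above.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the request shape used by the bedrock-runtime `converse` call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request(
    "openai.gpt-5",  # hypothetical identifier, for illustration only
    "Summarize last quarter's support tickets.",
)

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

The stateful agent runtime the article describes would sit a layer above this, managing memory and long-running sessions, but its API surface has not been detailed publicly.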
Phase 4: The money math behind the partnership
The glamour story here is competition: Amazon vs. Microsoft, Azure vs. AWS, GPT‑4 vs. everyone else. The real story is the bill.
When Amazon wired that up‑to‑$50‑billion investment into OpenAI, OpenAI didn’t just take the cash and walk away. It simultaneously “committed to spending $100 billion on AWS computing power and Trainium chips over eight years, consuming two gigawatts of capacity.”1
That’s an enormous, long‑term bet on AWS infrastructure—and a huge guaranteed revenue stream for Amazon’s cloud, assuming OpenAI can keep paying.
But that’s the catch. According to reporting summarized in The Next Web, the deal lands “as the WSJ reports OpenAI missed revenue and user targets, with $25 billion in expected cash burn against $30 billion revenue, and hundreds of billions in infrastructure commitments to AWS, Azure, and Oracle that assume growth OpenAI has not yet demonstrated.”1
The article puts the central tension bluntly: “The question the deal answers is not whether OpenAI’s models are good enough to sell on rival clouds. The question is whether OpenAI can sell enough of them, anywhere, to justify what it has promised to spend.”1
In other words, putting GPTs on AWS isn’t just about choice; it’s about volume. OpenAI now must find customers across every major cloud to make its commitments remotely sustainable.
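The scale of that mismatch is easy to check with the figures reported above. A back-of-the-envelope sketch, assuming (a simplification, since real contract ramps are rarely linear) that the $100 billion AWS commitment is spent evenly over the eight years:

```python
# Back-of-the-envelope math using only the figures cited in this article.
# Assumes the $100bn AWS commitment is drawn down evenly over 8 years,
# which is a simplification; the actual spend schedule is not public.

aws_commitment_bn = 100   # total committed to AWS compute and Trainium, $bn
years = 8
annual_aws_spend_bn = aws_commitment_bn / years

revenue_bn = 30           # reported expected revenue, $bn
cash_burn_bn = 25         # reported expected cash burn, $bn

print(f"Annualized AWS commitment: ${annual_aws_spend_bn:.1f}bn/yr")
print(f"Cash burn as share of revenue: {cash_burn_bn / revenue_bn:.0%}")
```

Even on this generous straight-line assumption, the AWS line alone would absorb roughly 40% of current expected revenue each year, before any Azure or Oracle commitments, which is exactly the tension behind the article's closing question.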
Phase 5: Microsoft’s new posture — from moat to market
For Microsoft, ending exclusivity is both a concession and a flex.
The concession is obvious: Azure no longer has the only first‑class pipe into OpenAI for cloud customers. The moat that defined the early generative‑AI land grab is gone.
But strategically, Microsoft gets something more subtle:
- It keeps a long‑running (through 2032) non‑exclusive license to OpenAI’s IP.
- It retains its significant equity stake and strategic alignment.
- It offloads some of the massive future infrastructure burden onto rivals.
If OpenAI is indeed on the hook for “hundreds of billions in infrastructure commitments to AWS, Azure, and Oracle,” then sharing the customer base across multiple clouds works in Microsoft’s favor.1 Rather than fronting the entire compute future of OpenAI alone, it now sits as one of several core suppliers, still tightly integrated but no longer singularly exposed.
Microsoft, in effect, shifts from being OpenAI's gatekeeper to being its largest, though no longer only, distributor and infrastructure partner.
Phase 6: Amazon’s angle — from model neutrality to model marketplace
AWS has long sold itself as the neutral marketplace for AI models, in contrast with Azure’s dependence on OpenAI. By bringing OpenAI into Bedrock, Amazon gets to have it both ways.
On one hand, the company can now tell any enterprise buyer: you can get OpenAI here too. On the other, it can keep pushing its own models and custom silicon (Trainium) as cost‑efficient alternatives.
That duality is woven into the financial engineering of the deal. OpenAI’s $100‑billion commitment to AWS compute and Trainium is not just a revenue story; it’s a validation story. If the world’s most famous AI lab is agreeing to consume two gigawatts of AWS capacity, Amazon can credibly claim its chips and infrastructure are ready for the heaviest AI loads.1
At the same time, the presence of “new OpenAI products on AWS”2 gives Amazon an immediate way to upsell AI to its massive installed base of cloud customers who may have hesitated to jump clouds just to get GPT‑class models.
Phase 7: OpenAI in the middle — power player or pressured tenant?
OpenAI sits at the center of this realignment, but not necessarily in the strongest position.
On paper, it looks like the ultimate winner: a record‑breaking $110‑billion funding round, an $852‑billion valuation, multi‑cloud distribution, and deep partnerships with both Microsoft and Amazon.1
In practice, it’s under intense operational pressure. Missing revenue and user targets while planning for $25 billion in cash burn against $30 billion in revenue is the kind of math that makes investors—and now cloud landlords—nervous.1
By opening up to AWS, OpenAI gets a broader sales front and the possibility of more stable, diversified enterprise demand. But it also locks itself into massive, long‑dated infrastructure deals across three major clouds—AWS, Azure, and Oracle—that “assume growth OpenAI has not yet demonstrated.”1
The pivot to multi‑cloud isn’t a luxury; it’s an obligation.
What changes for everyone else
For enterprises, the most immediate impact is simple: more choice, less lock‑in.
If you’re an AWS customer, you no longer have to weigh migrating workloads to Azure just to get the latest GPTs. If you’re a multi‑cloud shop, you can now architect systems that run OpenAI agents closer to your existing AWS data and services, especially through the new stateful agent runtime on Bedrock.1
For other AI model providers, the move cuts both ways. On the one hand, the arrival of OpenAI on AWS intensifies competition inside Bedrock’s catalog. On the other, it normalizes the idea that flagship proprietary models will sit side‑by‑side with open and partner‑built alternatives inside the same cloud platforms. The game becomes differentiation—on cost, latency, fine‑tuning, and integration—rather than exclusive access.
And for regulators and industry watchers, this week marks a shift from a “winner‑takes‑most” dynamic to something closer to a network of interlocking dependencies: Microsoft and Amazon compete fiercely on cloud while co‑depending on the same AI supplier; OpenAI spreads its bets across multiple infrastructure giants even as those giants become its biggest creditors.
The new normal: AI without a single gatekeeper
The first act of the generative‑AI era was about who got to own the front door. With Microsoft’s exclusivity gone and “Amazon already offering new OpenAI products on AWS,”2 that model is effectively over.
The second act will be messier, more distributed, and far more expensive. Cloud providers will fight to differentiate on tooling, agents, chips, and integrations. OpenAI will fight to grow fast enough to justify its eye‑watering infrastructure tabs. And customers—finally—will get to decide whose cloud, and whose GPT, is actually worth the bill.
1. AWS to sell OpenAI models after Microsoft drops exclusivity, as OpenAI misses revenue targets and faces $100B infrastructure commitments — “AWS will sell OpenAI models after Microsoft ended its exclusive reselling rights, completing a restructuring that began with Amazon’s $50 billion investment in OpenAI’s $110 billion funding round… The question is whether OpenAI can sell enough of them, anywhere, to justify what it has promised to spend.”
2. Amazon is already offering new OpenAI products on AWS — “Amazon is already offering new OpenAI products on AWS.”