Details From Mira Murati's Deposition in Musk v. Altman Trial Revealed
- Before the blow‑up: a partnership, not a product plan
- Inside the safety fight: Murati vs. Altman
- November 2023: the firing heard round the AI world
- The flip: from architect of the ouster to architect of the comeback
- Everyone’s villain; everyone’s hero
- What Murati’s story really exposes
News outlets portray Murati’s deposition as a dramatic insider narrative of OpenAI’s 2023 leadership crisis, emphasizing her claims that Altman lied about a safety review, undermined her, and nearly pushed the company to the brink of collapse. They focus on her shifting stance during the ouster, her intensive coordination with Microsoft, and what her testimony reveals about deep governance and trust failures at OpenAI. (The Verge)

OpenAI’s most infamous weekend is back under the microscope — and this time, Mira Murati’s own words are doing the cutting. The former CTO’s deposition in Musk v. Altman paints a picture of a CEO she says she couldn’t trust, a board she says bungled his firing, and a company she believed was on the brink of collapse.
Before the blow‑up: a partnership, not a product plan
In her video testimony, Murati revisited the early days of OpenAI’s work with Microsoft. When the two companies first started building GPT models together, she told the court there was no expectation that these systems would immediately turn into a commercial bonanza. The collaboration, she said, was about research and capability, not yet about cashing in.[1]
That origin story matters, because it sets up the central tension that has haunted OpenAI ever since: how fast to move from research lab to profit engine — and who gets to set the rules when safety concerns clash with commercial momentum.
Meanwhile, the broader AI industry was starting to look like a power scramble. As one tech outlet put it, “Everybody wants to rule the AI world,” a line that now reads less like a clever headline and more like a diagnosis.[2]
Inside the safety fight: Murati vs. Altman
The deposition dives straight into that tension. Murati testified that in at least one key case, she believed Sam Altman lied to her about how OpenAI was handling safety for a powerful new model.
According to Murati, Altman told her that OpenAI’s legal department had determined a new AI model did not need to go through the company’s deployment safety board. Under oath, she said that account simply wasn’t true. When asked directly whether Altman’s statement was accurate, Murati replied: “No.”[3]
Rather than taking Altman’s word for it, she went to Jason Kwon — then general counsel, now chief strategy officer — and discovered what she described as a “misalignment” between what Kwon was saying and what Altman had told her. She concluded that the safest path was to send the model through the safety board anyway.[3]
Her broader critique was blunt: her problems with Altman were “completely management related.” She described having “an incredibly hard job to do in an organization that was very complex,” and said she was “asking Sam to lead, and lead with clarity, and not undermine my ability to do my job.”[3]
Murati further agreed with earlier characterizations of Altman as someone who pitted executives against each other and undermined his own leadership team — a pattern cofounder Ilya Sutskever had laid out in a 52‑page memo to the board, and that former board member Helen Toner had echoed publicly in 2024.[3]
In other words, by Murati’s account, the trust problem with Altman wasn’t a one‑off misunderstanding; it was a way of operating.
November 2023: the firing heard round the AI world
All of that boiled over in November 2023. The OpenAI board abruptly fired Altman, announcing that he was “not consistently candid in his communications with the board.”[4] The move detonated across the tech world, triggering what one reporter later described as “the AI industry’s biggest soap opera moment.”[4]
At first, Murati looked like the board’s chosen stabilizer. She was installed as interim CEO almost immediately after the firing, thrust from CTO into the spotlight at the exact moment the company needed a public adult in the room.[4]
Behind the scenes, though, chaos reigned. Staffers, investors, and board members all scrambled to figure out what had really happened. The vague board statement fueled conspiracy theories. On X, employees began an online campaign in support of Altman’s return, many posting the now‑famous line: “OpenAI is nothing without its people.”[4]
Trial exhibits and testimony now show Murati at the center of that storm. Her deposition “pulled back the curtain” on the ouster, revealing just how central she was to both the initial push against Altman and the whiplash pivot that followed.[4]
The flip: from architect of the ouster to architect of the comeback
Here’s where the story turns.
Initially, Murati’s concerns and alignment with the board were instrumental in the decision to remove Altman. But in the days that followed, she shifted decisively into the pro‑Altman camp.
As the company veered toward mutiny, Murati concluded that the board’s process couldn’t be trusted. She later said, “The board had not followed a process that could be trusted and it wasn’t transparent with regard to firing Sam.”[5] That loss of faith in the board’s legitimacy became the foundation for her push to bring Altman back.
From there, she moved quickly. Murati informed Microsoft’s Kevin Scott that Ilya Sutskever — the same cofounder who had helped orchestrate the firing — had signed a petition to reinstate Altman.[5] The signal was unmistakable: even inside the original coup coalition, support was cracking.
At the same time, Satya Nadella met with the OpenAI board, a clear escalation from Microsoft, whose multibillion‑dollar stake gave it extraordinary leverage over the company’s future.[5]
Murati’s own position was equally clear. She “also wanted Altman back,” the live trial coverage noted.[5] Text messages entered into evidence show her coordinating with Altman and Nadella during the crisis as the company tried to find a way out of the spiral.[4]
The stakes, in her words, were existential. “OpenAI was at catastrophic risk of falling apart” when Altman was fired, she said.[5]
Within days, the endgame arrived: Murati’s brief tenure as interim CEO ended when outsider Emmett Shear was tapped as a compromise leader, a move that satisfied almost no one. Altman ultimately returned, and the board that had ousted him was largely gone.[4]
Everyone’s villain; everyone’s hero
Murati’s deposition, and the surrounding reporting, lay out a deeply tangled set of perspectives:
- Murati on Altman: A brilliant but dangerous manager whose lack of candor and habit of pitting executives against one another made him impossible to rely on — and yet, paradoxically, a leader whose absence pushed the company toward disintegration. She testified she couldn’t trust his word on a crucial safety call, but also fought to reinstate him after concluding the board had acted improperly.[3][5]
- The board on Altman: A CEO who was “not consistently candid” and whose behavior undermined their ability to exercise oversight.[3][4] Their remedy — a sudden firing via vague blog post — may have been procedurally defensible in their eyes but proved politically catastrophic.
- Murati on the board: A body that failed the basic test of transparent, trustworthy governance. Whatever substantive concerns they had about Altman, she argued, the way they executed his removal was so opaque that it lost her confidence and, ultimately, the company’s.
- Employees and Microsoft: The rank‑and‑file signaled near‑unanimous support for Altman’s return, while Microsoft escalated from strategic partner to active power broker, with Nadella personally engaging the board and executives trading messages with Redmond as they gamed out the future.[4][5]
Hovering over all of it is the Musk v. Altman trial itself, which has turned the internal drama of one AI company into a public case study in how not to run a world‑changing lab.
What Murati’s story really exposes
Taken together, Murati’s testimony and the surrounding exhibits show a company trapped between two instincts: move fast and seize the AI crown, or slow down and build the bureaucratic muscle to keep its own power in check.
On one side are the safety boards, the legal reviews, the insistence that models too powerful to fully understand must at least be procedurally controlled. On the other side are the pressures that come with a near‑monopoly on the world’s most hyped technology — and a market where, as one podcast framed it, “Everybody wants to rule the AI world.”[2]
Murati’s deposition didn’t just “pull back the curtain” on Sam Altman’s ouster; it exposed how fragile the entire governance experiment at OpenAI really was.[4] By her own account, she found herself unable to trust the CEO, unable to trust the board, and yet convinced that losing either could destroy the company.
“Catastrophic risk of falling apart” wasn’t just a dramatic turn of phrase — it was a diagnosis of an institution trying, and failing, to reconcile its mission to build safe AI with the raw politics of ruling the AI world.[5]