Musk v. Altman Trial: Helen Toner's Deposition Details CEO's Firing
- Early Warning Signs: A Board in the Dark
- The Concerns Build: Sutskever Speaks Up
- Inside the Boardroom: A Decision-Making Black Box
- Legal Advice—or the Lack of It
- The Trial Stage: Depositions as Public Autopsy
- Perspective One: The Reformers on the Board
- Perspective Two: The Critics of the Coup
- Perspective Three: The Broader Tech and Governance Lens
- What Comes Next
Coverage portrays Toner’s deposition as a detailed, credible account of how concerns about Sam Altman’s honesty and a poorly run, insular board process led to his firing at OpenAI. It underscores concrete procedural failures as central to understanding both the trial and OpenAI’s governance crisis: no input from key stakeholders, minimal legal guidance, and a board surprised by ChatGPT’s launch.
The Musk v. Altman trial has turned into a retroactive board meeting held in public, and Helen Toner is the reluctant chair, walking the jury through how one of tech’s most powerful CEOs was ousted without ever getting a proper hearing.
Early Warning Signs: A Board in the Dark
Long before the November 2023 firing, Toner says the board was already struggling to understand what was really happening inside OpenAI.
She testified that she first learned about ChatGPT the way a random user might: by scrolling social media. “Toner says she found out about ChatGPT by seeing screenshots on Twitter.”1 That wasn’t a funny anecdote so much as an indictment of governance. As she put it, she was “used to the board not being very informed about things,” a gap in communication that “caused me to believe that [Altman] was not motivated to help the board perform the oversight role.”1
In other words: the board responsible for overseeing one of the most consequential AI labs on earth only discovered its blockbuster product after the rest of the internet did.
The Concerns Build: Sutskever Speaks Up
By Toner’s account, the road to Altman’s firing didn’t begin with a single scandal, but with a quiet, escalating unease inside the company’s upper ranks.
“Toner is relating how Sam Altman’s firing happened.”2 The catalyst, she says, was a conversation initiated by chief scientist Ilya Sutskever. He “reached out to have a conversation where he expressed serious concerns about Altman.”2 Those concerns weren’t about one explosive incident but a “pattern of behavior” that included issues with “honesty and candor” that ultimately led to the decision to remove him.2
Toner says she had already described some of this pattern in a 2024 podcast, and that her account lines up with testimony from then-interim CEO Mira Murati.2 The picture that emerges from their combined narratives is of a CEO whose relationship with the board had degraded from tense to untenable.
Inside the Boardroom: A Decision-Making Black Box
When Toner’s pre-recorded video deposition rolled in court, jurors were warned it might be a slog. “We are now looking at Helen Toner’s deposition. This should be about an hour. YGR has told the jury that if she sees them falling asleep, she’s going stop the video and have them stand and stretch.”3 The boredom risk, however, masked a bombshell: how little process there actually was behind one of the most scrutinized firings in Silicon Valley history.
“Helen Toner is now talking about the board’s decision-making process.”4 She laid out a stark list of what the board did not do before firing Sam Altman.
According to Toner, neither Sam Altman nor OpenAI president Greg Brockman was permitted to present his perspective to the board before the vote. “Neither Altman or Brockman had been allowed to tell their side of the story, nor were their HR files pulled by the board. There was no input from Microsoft, or any other investors or customers.”4
In the era of compliance teams, outside counsel, and crisis PR, the OpenAI board instead opted for something closer to a corporate ambush.
Legal Advice—or the Lack of It
If that sounds like the kind of decision that would have been heavily lawyered, at least one observer following the trial isn’t convinced.
Another slice of live coverage captured a blunt takeaway: “The main thing I am taking away from McCauley’s and Toner’s testimony is that the board got really bad advice from whatever lawyers they consulted on the firing Altman thing. I mean, I hope they consulted lawyers. I don’t think that’s come up in the testimony.”5
That comment, referencing both Toner and fellow former board member Tasha McCauley, underscores a central tension of the trial: Was this a courageous act of governance in the face of a wayward CEO, or a spectacular case of fiduciary malpractice?
The Trial Stage: Depositions as Public Autopsy
The Musk v. Altman case—framed as a battle over the “future of OpenAI” in the live-blogs that have become the de facto public record—has turned depositions into serialized drama.4 One entry opens with the dry but telling note: “You may wonder: are we still listening to the video deposition of Tasha McCauley?”5 Yes, the reporter assures, they are—because buried inside these hours of video is the closest thing the public will get to a postmortem on the November coup.
Toner’s segment, in particular, stitches together the internal and external timelines: a board left in the dark about flagship products, a chief scientist raising alarms about the CEO’s honesty, and then, suddenly, a clean decapitation of leadership with almost no paper trail.
Perspective One: The Reformers on the Board
From Toner’s vantage point, the board’s moves were clumsy but fundamentally about oversight. Her discovery of ChatGPT on Twitter becomes Exhibit A in a case against Altman’s transparency: if the board can’t even be trusted with product launches, can it really be trusted with existential AI risk?
Her recounting of Sutskever’s concerns—“serious concerns about Altman” rooted in a “pattern of behavior” around “honesty and candor”2—frames the firing as the reluctant culmination of long-standing doubt.
In this telling, the problem isn’t that the board acted; it’s that it acted late and, perhaps, with too little procedural rigor.
Perspective Two: The Critics of the Coup
The counter-narrative emerging around the trial doesn’t deny that there were issues with Altman. Instead, it zeroes in on process: a board that didn’t check HR files, didn’t solicit input from Microsoft—the partner that had poured billions into OpenAI—and didn’t even let the CEO speak in his own defense.4
To those critics, the debacle looks less like principled governance and more like a governance vacuum. The live-blogger’s acerbic line about the board getting “really bad advice from whatever lawyers they consulted” and the possibility they might not have consulted any at all has become a shorthand for that skepticism.5
Even the courtroom choreography hints at unease: a judge reminding jurors she’ll make them stand and stretch rather than let them sleep through the granular details of how the most-watched board in tech made one of its biggest calls.3
Perspective Three: The Broader Tech and Governance Lens
Viewed from a distance, the Musk v. Altman trial isn’t just about two tech titans or one very messy firing. It’s a stress test for the entire theory of “mission-driven” AI labs governed by nonprofit boards, charged with managing for humanity while juggling hyper-capitalist incentives.
Here’s what the trial record—such as it is—suggests so far:
- The board felt systematically under-informed about core company activities, including major product launches like ChatGPT.1
- Concerns about the CEO’s candor and honesty were surfacing from senior technical leadership long before the firing.2
- When action finally came, it was executed with astonishing procedural minimalism: no HR review, no key-partner consultation, no opportunity for the accused to respond.4
Against the backdrop of Musk’s lawsuit and Altman’s eventual reinstatement, those facts raise a brutal question for the entire AI sector: if this is how the flagship “safety-conscious” lab handles internal crises, what happens when the stakes involve something more than corporate control—say, a runaway model or a catastrophic deployment mistake?
What Comes Next
For now, the trial continues to unspool, deposition by deposition. “We are now looking at Helen Toner’s deposition,” one live update began, as if announcing just another evidentiary exhibit.3 Instead, Toner’s account has become the spine of a larger story about power, secrecy, and oversight at the heart of the AI boom.
Altman, Musk, and OpenAI will each keep insisting they are the ones trying to save the future. The court, and eventually the public, will have to decide whether what the board did in 2023 looks more like a necessary course correction—or a cautionary tale about how not to govern the engines of the next technological era.