Shivon Zilis Testifies in Musk v. Altman Trial

During testimony in the Musk v. Altman trial, Neuralink executive Shivon Zilis discussed her concerns about Sam Altman, which she said she raised with the OpenAI board. These included the board not being notified in advance of ChatGPT's release and a potential deal with nuclear energy company Helion, in which Altman was an investor.
Human coverage portrays Zilis as a conflicted insider whose testimony raises serious doubts about OpenAI’s governance, Altman’s transparency, and Microsoft’s influence, while also revealing her lingering personal sympathy for Altman. It emphasizes her memory inconsistencies, her critiques of unconsulted decisions like ChatGPT’s release and the Helion deal, and her interpretation of Microsoft’s role as near‑controlling, evidence that supports Musk’s narrative of mission drift. @Verge

Shivon Zilis walked into the Musk v. Altman courtroom as a neural‑implant executive and former OpenAI board member. By the time she walked out, she’d sketched a picture of an AI lab where world‑changing products launched without board sign‑off, side deals raised conflict‑of‑interest alarms, and Microsoft’s embrace felt less like partnership and more like a chokehold.

From teenage futurist to Musk ally on the stand

Musk’s legal team called Zilis, now a senior figure at Neuralink, to bolster their narrative that Sam Altman steered OpenAI away from its nonprofit mission and into the arms of Big Tech.1 On the stand, Zilis traced her long obsession with AI back to age 13, when she picked up Ray Kurzweil’s The Age of Spiritual Machines and read it “10–15 times,” a book she said opened a “new world” for her.1

After Yale, she told the court, she cycled through IBM, then Bloomberg Ventures, and finally helped launch Bloomberg Beta, an early AI‑focused venture fund.1 That background set her up as more than a friendly Musk witness; she was positioning herself as someone who’s been thinking about AI risk and governance for decades.

The first red flag: ChatGPT blindsides the board

Chronologically, Zilis’ first major break with Altman came not over abstract safety theory, but over basic corporate governance. She testified that the nonprofit OpenAI board was not told in advance about the broad public release of ChatGPT.

She said she had “major concerns” about the board learning about one of the most consequential AI launches in history after the fact, and that the “entire board had voiced extreme concern about that whole massive thing happening without any semblance of board communication.”2 According to Zilis, this was “the first concern she raised internally about Altman.”2

Her account supports Musk’s core claim: that OpenAI’s leadership was willing to move fast and make history without answering to the mission‑guarding board that was supposed to keep the for‑profit side in check.3

The Helion deal and a pit‑of‑the‑stomach moment

Zilis’ second major concern, and the one she described with the most visceral language, involved a proposed deal with nuclear energy startup Helion.

Altman and OpenAI president Greg Brockman were both investors in Helion, Zilis said. That alone would be enough to trigger conflict‑of‑interest questions. But the substance of the deal alarmed her even more: Helion “didn’t have an official product yet,” she noted, and the arrangement “felt super out of left field … How is it the case that we want to place [a] major bet on a speculative technology?”4

In a separate description of the same concerns, she told the court she had raised these issues directly with the OpenAI board. First, over the undisclosed ChatGPT launch; second, over Helion. The Helion proposal “raised eyebrows because Altman and Brockman both had investments and the tech was still speculative,” she said, adding that it was “probably the only time where I remember feeling in the pit of my stomach — just being like, I voiced my concerns.”3

For Musk’s side, this was gold: a former insider portraying OpenAI as willing to gamble its future on a technology in which its own leaders were financially entangled.

Microsoft: partner, or puppeteer?

If Helion represented speculative risk, Microsoft represented structural capture. Zilis told the court that she had initially accepted the idea that Microsoft was a powerful but bounded partner. That changed in the shockwave after Altman’s dramatic ouster by the OpenAI board in November 2023.

She recalled Microsoft CEO Satya Nadella describing the relationship at the time by saying Microsoft was “below them, above them, around them” — a phrase she interpreted as signaling “complete control.”5

To Zilis, that was “terrifying because [it] was just not the thing that we had been fighting so hard for.”5 The ouster and its aftermath, she testified, “changed her view of OpenAI’s Microsoft deal.”5

Her alarm extended beyond Big Tech’s reach. She also said she was disturbed that the board members who voted to remove Altman were effectively “expelled,” and that the external law firm OpenAI hired to investigate never, in her view, told the public what had really happened.5 In a trial nominally about contractual promises to Musk, that detail bolstered a broader narrative: an organization willing to bury inconvenient truths.

Behind the scenes: brainstorming an AI power map

The testimony wasn’t all about past grievances. Exhibits and questioning also revealed how Zilis, long before this trial, had imagined radically reshaping the AI landscape in ways that would have bound OpenAI more tightly to Musk.

In one brainstorming exercise, she laid out “possible scenarios for AI,” three of which centered on Tesla AI. One scenario had OpenAI becoming a benefit‑corp subsidiary of Tesla. Another envisioned “Altman as anchor for TeslaAI.”6

But the most striking line in those notes targeted DeepMind cofounder Demis Hassabis: “Find a way to get Demis. Seriously…. Demis really does fanboy hard and I don’t think he’s immoral… just amoral. If he hung around E perhaps it would force him to think about humanity more.”6

Those messages show Zilis not just as a board member worried about process, but as an operator thinking in terms of personalities, loyalties, and gravitational pulls — who belongs in whose orbit, who needs moral anchoring, and who could be captured for “Team Elon.”

A complicated relationship with Sam Altman

If Musk’s lawyers hoped Zilis would paint Altman as a cartoon villain, the record is more tangled.

On substance, she offered Musk precisely what he wanted: testimony that she had “concerns about Altman that she raised with the board of OpenAI,” especially the “broad release of ChatGPT” that wasn’t discussed with the nonprofit board, and the Helion deal that made her deeply uneasy.3

On a personal level, though, Zilis’ communications told a different story. After Altman’s 2023 firing, she texted him a message of pure human concern: “I just wanted to say I hope you are [OK]. I have no idea what’s going on but … I care about you as a person first and foremost. Sending all of my positive vibes your way.”7

That supportive note — read aloud in court — undercut any suggestion that she’d long been an outright Altman antagonist. Instead, she came across as someone who both liked and worried about him, a sympathizer who nonetheless suspected he was steering OpenAI off course.

Memory gaps and courtroom friction

The most viral moments of Zilis’ appearance weren’t about Microsoft, Helion, or even ChatGPT. They were about her memory.

When pressed on a text message she’d sent Musk about OpenAI’s deal with Microsoft — specifically, her characterization that “the structure was not maximum profit and Microsoft was not in control” — Zilis looked at the exhibit and insisted she didn’t actually remember the discussion itself. “It’s not in my neurons,” she said, opting for neuroscience‑adjacent phrasing instead of the standard “I don’t remember.”8 She added that she could see the text on the page, but it still wasn’t “in my brain.”8

That formulation drew skepticism from the OpenAI side. Attorney Sarah Eddy seized on inconsistencies between Zilis’ deposition and her live testimony. Zilis had previously claimed not to recall certain messages, then on the stand said she now did remember them after reviewing documents “numerous times.” Eddy’s response dripped with sarcasm: “Your long-lost memories have since been recovered.”9

In a case already thick with accusations of spin and bad faith, this exchange gave OpenAI’s lawyers an opening to suggest Zilis’ recollections might be shaped more by the needs of Musk’s lawsuit than by clear, contemporaneous memory.

The emerging fault lines

Stacked chronologically, Zilis’ testimony traces a slow, rising tension inside OpenAI that mirrors the broader AI industry:

  1. Early enthusiasm and idealism – A board stacked with true believers, including Zilis, who had spent their careers betting on AI’s upside.1
  2. Governance shock – The surprise ChatGPT launch without prior board notice, which Zilis said prompted “extreme concern” and marked her first internal complaint about Altman’s leadership style.2
  3. Conflict‑of‑interest alarms – The Helion proposal, combining personal investments and “speculative technology,” triggered what she described as a “pit of my stomach” moment and a fear that OpenAI might be captured by its own leaders’ side bets.34
  4. Platform capture fears – The 2023 coup attempt and Nadella’s “below them, above them, around them” comment convinced her that Microsoft had far more practical control over OpenAI than the nonprofit gloss suggested.5
  5. Post‑coup opacity – The expulsion of Altman’s opponents on the board and the opaque outside investigation deepened her sense that the organization had strayed from its lofty promise of serving “humanity.”5

Where Musk’s camp sees a story of betrayal and captured governance, OpenAI’s side is working hard to frame Zilis as a selective and unreliable narrator — sympathetic, perhaps, but far from the impartial conscience of the company.

In that clash, one thing is clear: the fight over OpenAI’s soul is no longer an abstract debate about alignment papers and safety charters. It’s playing out in courtrooms, through text messages, board coups, and, yes, neural metaphors about whether inconvenient memories are really “in your neurons.”

Story coverage

Referenced event not yet available nevent1qqs2j…cq7acy3p
Referenced event not yet available nevent1qqs2s…vq95h5lk
Referenced event not yet available nevent1qqsff…lcw878m3
Referenced event not yet available nevent1qqsdq…dc8d8x2l
Referenced event not yet available nevent1qqs0a…2cy5cqcl
Referenced event not yet available nevent1qqs9z…8qh540sx
Referenced event not yet available nevent1qqszg…4gjexc2v
Referenced event not yet available nevent1qqs0w…tcqh8yh7
Referenced event not yet available nevent1qqsz4…psjjdk34
