Elon Musk Testifies That xAI Used OpenAI Models to Train Grok

During testimony in his lawsuit against OpenAI, Elon Musk confirmed that his AI startup, xAI, used a technique called model distillation to train its Grok AI on OpenAI's models. Musk stated that such practices are common among all AI companies for validation purposes.
Coverage of the testimony emphasizes that Musk's courtroom admission about distilling OpenAI models to build Grok undercuts his portrayal of himself as a wronged safety advocate, highlighting possible hypocrisy and legal gray areas around using competitors' APIs. It situates the testimony within a larger narrative of personal grievance, nonprofit mission drift at OpenAI, and escalating ethical and regulatory questions about how commercial AI labs train and deploy powerful systems.

Elon Musk walked into a California courtroom to accuse OpenAI of betraying its founding mission, then under oath admitted his own AI startup has been quietly feeding on OpenAI's models to build a rival chatbot.

That contradiction is now the central tension in Musk v. Altman: a case that started as a moral crusade over nonprofit ideals and may end as a referendum on how every big AI lab really trains its models.


2015–2023: From benefactor to rival

Back in 2015, Musk cast himself as OpenAI’s patron saint of safe AI, cofounding the lab alongside Sam Altman and Greg Brockman and, by his own account, pumping in tens of millions of dollars expecting nothing in return.

On the stand, Musk framed those early years in starkly personal terms: “I was a fool who provided them free funding to create a startup,” he told the jury, saying he believed he was supporting a nonprofit “developing AI for the benefit of humanity, not to make the executives rich.” He claimed, “I gave them $38 million of essentially free funding, which they then used to create what would become an $800 billion company.”1

OpenAI’s later restructuring into a capped‑profit entity—and the multihundred‑billion‑dollar valuations that followed—became, in Musk’s telling, a betrayal that set the stage for his lawsuit. He is now asking the court to unwind that structure and remove Altman and Brockman from their roles.1

Meanwhile, Musk didn't just leave and sulk. In 2023, he launched xAI and its edgier chatbot, Grok, positioning it as a "truthmaxxing" alternative to what he casts as overly censored AI. On X, a fan summed up the ethos: "If I was to distill the Elon mindset into one thing, it would be truthmaxxing." Musk amplified the message with a retweet.6

By early 2026, xAI was not just a side project: it was being lined up for a public listing, folded into SpaceX, at a target valuation of $1.75 trillion—eclipsing even OpenAI’s sky‑high numbers.1

In other words, the man suing OpenAI for drifting toward profit was simultaneously building what could become the most valuable AI company on Earth.


2023–2024: Distillation moves from open secret to open war

While Musk and OpenAI were diverging, the AI industry was quietly converging on a controversial technique: model distillation.

Distillation, in simple terms, lets a smaller “student” model learn from a bigger “teacher” model by querying it at scale and training on the outputs. It’s widely used within companies—OpenAI and Anthropic, for example, distill their own frontier models into cheaper, smaller ones for customers—but it becomes explosive when one lab distills another’s proprietary system.
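The student-teacher mechanic can be shown in miniature. In the sketch below, everything is invented for illustration (real distillation queries a large neural model over an API, not a one-line formula): a "teacher" returns soft probabilities, and a tiny logistic "student" is trained purely on those outputs, never on ground-truth labels.

```python
import math
import random

def teacher(x):
    """Hypothetical black-box 'teacher': returns a soft probability,
    standing in for a proprietary model queried at scale."""
    return 1.0 / (1.0 + math.exp(-(3.0 * x - 1.0)))

def distill(num_queries=2000, lr=0.5, epochs=200, seed=0):
    """Fit a tiny logistic 'student' to the teacher's outputs alone.
    The training targets are the teacher's soft probabilities, which
    is the essence of distillation: no true labels are ever seen."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2.0, 2.0) for _ in range(num_queries)]
    soft_labels = [teacher(x) for x in xs]  # the "distillation dataset"
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, t in zip(xs, soft_labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - t) * x  # cross-entropy gradient w.r.t. w
            gb += (p - t)      # cross-entropy gradient w.r.t. b
        w -= lr * gw / num_queries
        b -= lr * gb / num_queries
    return w, b

w, b = distill()
# The student's parameters drift toward the teacher's (3.0, -1.0),
# recovering its behavior without access to its internals.
```

The asymmetry the labs object to is visible even here: the teacher's capability cost whatever it cost to build, while the student needs only query access and a far smaller training budget.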

By 2024, frontier labs were sounding the alarm about what Google would later call “distillation attacks,” describing them as “a method of intellectual property theft that violates Google’s terms of service.”2 Anthropic warned that while “distillation is a widely used and legitimate training method,” it can also let “competitors…acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”2

OpenAI and Anthropic publicly pointed the finger at Chinese outfits like DeepSeek, Moonshot, and MiniMax, accusing them of using distillation to clone Western models into cheap open‑weight competitors.2 OpenAI, Anthropic, and Google joined forces through the Frontier Model Forum to share intelligence and build defenses, trying to block suspicious mass querying of their APIs.3

The subtext, widely assumed inside the industry but seldom admitted out loud: if Chinese firms were distilling U.S. models, why wouldn’t U.S. firms quietly be doing it to each other?


April 30, 2026: Musk says the quiet part out loud

That assumption became fact on April 30, 2026.

In a federal courtroom in California, OpenAI’s lawyers pressed Musk about whether xAI had used OpenAI’s systems to train Grok. The line of questioning zeroed in on distillation.

Asked if he knew what model distillation was, Musk replied that it meant using “one AI model to train another.” When asked whether xAI had distilled OpenAI’s technology, he initially dodged, saying that “generally all the AI companies” do it. When the lawyer demanded a straight answer—is that a yes?—Musk finally conceded: “Partly.” He then insisted, “It is standard practice to use other AIs to validate your AI.”2

Another account captured the same exchange more bluntly: Musk was asked if xAI had used distillation techniques on OpenAI models to train Grok and “asserted it was a general practice among AI companies. Asked if that meant ‘yes,’ he said, ‘Partly.’”3

In the gallery, there were “audible gasps” as Musk acknowledged that “his own AI company, xAI, which makes the chatbot Grok, uses OpenAI’s models to train its own.”1

For months, big labs had cast distillation from China as a kind of technological piracy. Now, one of the loudest critics of OpenAI had confirmed under oath that he was distilling OpenAI, too.


Inside the courtroom: Mission versus motives

Week one of Musk v. Altman quickly turned into a clash of narratives.

Musk’s story is grandiose and apocalyptic. He argues he’s suing to “save OpenAI’s mission to develop AI safely by restoring the company to its original nonprofit structure,” warning that AI could “destroy us all” if left in the hands of profit‑driven executives.1

OpenAI’s side, led by lawyer William Savitt—who once represented Musk and Tesla—tells a colder, more commercial story. Savitt contends Musk was “never committed to OpenAI being a nonprofit” and is now weaponizing the courts as a competitor, not as a safety crusader.1

That line of attack lands harder in light of xAI’s distillation. Musk is suing OpenAI for allegedly abandoning a nonprofit charter while simultaneously:

  • Running xAI as a hyper‑valued, soon‑to‑IPO competitor1
  • Admitting his company uses OpenAI’s own models to train Grok1,2,3

The irony is layered. As one analysis noted, distillation “threatens AI giants by undermining the advantage they’ve built by investing in compute infrastructure,” letting others “create models that are nearly as capable on the cheap.”3 Musk, once the benefactor who bankrolled OpenAI, is now using the company’s hard‑won performance as training fuel for his cut‑price rival.

And he’s not shy about marketing that rival. On X, Musk amplified a post praising Grok as the “Best ad for Grok imaginable,” quoting a user saying, “I can’t manipulate this AI into lying to me.”4 In another retweet, a supporter declared that “Elon’s influence was monumental to OpenAI” and that “there simply wouldn’t have been the AI world we have today” without him—framing Musk as the indispensable architect of the very ecosystem he’s now suing.5


The broader fight: Who owns AI’s collective brain?

Strip away the legal theater, and the trial is exposing a deeper industry‑wide fight: who owns the “knowledge” inside large AI models, and what counts as theft when everyone is training on everyone else.

On one side are the frontier labs, spending billions on compute and data, trying to wall off their models with terms of service and security tooling. OpenAI, Anthropic, and Google have “launched an initiative through the Frontier Model Forum to share information about how to combat distillation attempts from China,” including blocking systematic scraping and suspicious high‑volume queries.3
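The defensive tooling alluded to above can be sketched just as simply. The following is a minimal, invented example of high-volume-query detection (the class, thresholds, and API shape are assumptions for illustration, not any vendor's real system): a sliding-window counter flags API keys whose query rate looks more like systematic extraction than ordinary use.

```python
import time
from collections import deque

class QueryMonitor:
    """Toy sliding-window monitor that flags API keys issuing
    suspiciously many queries; a crude stand-in for the kind of
    anti-distillation defenses described above. All thresholds
    here are invented."""

    def __init__(self, max_queries=100, window_seconds=60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = {}  # api_key -> deque of query timestamps

    def record(self, api_key, now=None):
        """Record one query; return True if the key now exceeds
        the per-window budget and should be treated as suspicious."""
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(api_key, deque())
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_queries
```

In practice the labs' defenses are presumably far richer (behavioral fingerprinting, prompt-pattern analysis, account linkage), but the core idea of rate-based anomaly detection on API traffic is the same.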

On the other side sit upstart labs and open‑source‑oriented outfits, arguing—often quietly, sometimes loudly—that distillation is just the next logical step in an AI field that has always been built on remixing others’ work. As Anthropic itself conceded, “frontier AI labs routinely distill their own models to create smaller, cheaper versions.”2

Musk has now planted himself awkwardly in both camps. In court, he positions himself as a defender of OpenAI’s founding ideals and an apostle of AI safety. In the market, he is acting like any hard‑nosed competitor: poaching employees, launching a rival model, and openly confirming he uses others’ AIs to improve his own.1,3

That dual identity helps explain why this trial feels less like a clean morality play and more like a messy divorce in a very small, very powerful industry.


What comes next

Legally, Musk wants the court to tear up OpenAI’s corporate structure and oust its leadership—a remedy that could “upend OpenAI’s race toward an IPO at a valuation approaching $1 trillion.”1 Whether a judge will do something that drastic is unclear.

Practically, his testimony has already done something else: it has ripped the veil off distillation among U.S. labs. The practice is no longer just something Western companies accuse Chinese rivals of doing; it’s something one of Silicon Valley’s most famous CEOs now concedes his own company does, too.

For regulators and courts now circling AI—from copyright fights to antitrust cases—that admission is likely to echo far beyond this Oakland courtroom. If “all the AI companies” really are doing it, as Musk says, the question is no longer whether model distillation is happening.

The question is who gets to decide where collaboration ends and copying begins.


1. Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models — Musk testified he was “a fool” who gave OpenAI $38 million in “free funding,” claimed it became an $800 billion company, and admitted xAI uses OpenAI’s models to train Grok.

2. Elon Musk confirms xAI used OpenAI’s models to train Grok — In court, Musk defined model distillation, said “generally all the AI companies” do it, and, when pressed if xAI distilled OpenAI, answered “Partly,” calling it “standard practice to use other AIs to validate your AI.”

3. Elon Musk testifies that xAI trained Grok on OpenAI models — Coverage of Musk’s admission that xAI used distillation on OpenAI models, framed against frontier labs’ efforts to combat such techniques and the broader risk to incumbents’ compute advantage.

4. @elonmusk on X — Retweeted praise for Grok as the “Best ad for Grok imaginable” quoting, “I can’t manipulate this AI into lying to me.”

5. @elonmusk on X — Retweeted a supporter saying “Elon’s influence was monumental to OpenAI” and that without him “there simply wouldn’t have been the AI world we have today.”

6. @elonmusk on X — Shared a clip where a collaborator says, “You’ve gotta do what you think is right,” and sums up “the Elon mindset” as “truthmaxxing.”
