The AMW Read
Novelty 3: the admission overturns Musk's prior stance on distillation and directly affects a high-profile lawsuit. Significance 2: segment-level impact on foundation-model IP norms, but not structural across all segments.
Elon Musk admitted in court that xAI used distillation to extract knowledge from OpenAI models for training Grok. The admission came during Musk's testimony in his lawsuit against OpenAI, in which he seeks $150 billion in damages and an order blocking OpenAI's IPO. Musk initially hedged, saying "AI companies generally distill from each other," then conceded that xAI had done so "partly." The trial also surfaced internal OpenAI documents, including Greg Brockman's diary, which discussed profit motives even as Brockman assured Musk of the company's nonprofit commitment.
The admission is significant because it confirms how widespread a known industry practice, model distillation, has become, and exposes the porous boundaries between competing AI labs. Musk's claim that OpenAI abandoned its nonprofit mission is undercut by evidence that his own company relied on OpenAI's models. The trial highlights the tension between open-source ideals and proprietary model protection, a recurring pattern in the foundation model segment.
For the AI market, the trial underscores the strategic importance of model provenance and intellectual property. Distillation lets smaller players bootstrap from frontier models, but it raises legal and ethical questions. The outcome could reshape model-access norms and licensing practices. Regardless of the verdict, the trial has already shifted public perception: Musk's credibility as an "AI safety champion" has been damaged by his own actions, potentially altering the competitive dynamics between xAI and other labs.
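For readers unfamiliar with the mechanism at issue: distillation typically trains a smaller "student" model to match the output distribution of a larger "teacher," which is why API access to another lab's model outputs can be enough to transfer knowledge. Below is a minimal, self-contained sketch of the core objective; the function names, logits, and temperature value are illustrative assumptions, not any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution so the student sees more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened output distribution and
    # the student's: the standard soft-label distillation objective.
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Toy example with hypothetical logits for a single input. The student is
# trained against the teacher's soft labels rather than the original
# training data, which is what makes output-only "extraction" feasible.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.4]
loss = distillation_loss(student, teacher)
```

In practice this loss is minimized over many queries to the teacher, usually blended with a standard hard-label loss; the sketch shows only the matching step that the testimony refers to.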



