
xAI used OpenAI's models to train Grok, Musk admits in court
The AMW Read
Novelty 3: overturns the narrative that Musk's lawsuit was about defending open AI from closed practices, revealing his own use of distillation. Significance 2: segment-level impact on foundation model competitive dynamics and IP debates.
In a federal courtroom in California on Thursday, Elon Musk testified that his AI startup xAI has used OpenAI's models to improve its own Grok. Asked directly if xAI had distilled OpenAI's technology, Musk said "partly," acknowledging the practice of model distillation — using one large AI model as a "teacher" to pass knowledge to a smaller "student" model. Musk defended it as standard practice, arguing that "generally all the AI companies" do it and that it's common to use other AIs to validate your own.
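The teacher-student mechanism Musk described can be sketched in a few lines. This is a minimal, illustrative example only (not xAI's or OpenAI's actual method): a common distillation setup trains the student to match the teacher's temperature-softened output distribution by minimizing a KL divergence. The logit values and temperature below are made up for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    A student minimizing this loss learns to reproduce the teacher's
    soft predictions, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for one input with three possible outputs.
teacher = [4.0, 1.0, 0.2]
student_aligned = [3.8, 1.1, 0.1]   # mimics the teacher closely
student_off = [0.2, 4.0, 1.0]       # disagrees with the teacher

# The aligned student incurs a far smaller distillation loss.
assert distillation_loss(teacher, student_aligned) < distillation_loss(teacher, student_off)
```

In practice the "teacher" signal can also be as coarse as sampled text outputs used as training data, which is why terms-of-service clauses, rather than any technical barrier, are usually what prohibits distilling a rival's model.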
The admission is striking because Musk is currently suing OpenAI, in part over its alleged anticompetitive behavior. Model distillation has become a flashpoint: OpenAI and Anthropic have accused Chinese firms like DeepSeek of distilling their frontier models, and Google has sought to prevent what it calls "distillation attacks." Yet here, a US-based competitor led by the plaintiff himself admits to the same practice against the defendant. The hypocrisy is hard to ignore.
This case illustrates that distillation is already a structural reality of AI competition. The lines between legitimate training and IP theft remain blurry, and the market lacks clear enforcement mechanisms — especially when practiced by well-resourced labs. Expect more legal and technical friction as rivals increasingly copy each other's capabilities, and more attention from regulators on whether terms-of-service violations constitute real IP injury.

