Former DeepSeek core member Ruan Chong joins Yuanrong Qixing as chief scientist, details 40B VLA base model
The AMW Read
A top researcher moving from a leading LLM lab to an autonomous driving startup meaningfully reshapes the talent landscape in physical AI, and the VLA base model claim advances the segment's technological frontier.
Ruan Chong, a core DeepSeek contributor who worked on its VL, V3, R1, and V4 models, has joined autonomous driving company Yuanrong Qixing as chief scientist. In his public debut at the Beijing Auto Show, he detailed a 40-billion-parameter Vision-Language-Action (VLA) base model that unifies driving, analysis, and evaluation capabilities. He claimed the model cuts model iteration time from over 100 hours to about 10 hours, a roughly 10x efficiency gain.
Why it matters: This move exemplifies a recurring industry pattern of talent migrating from pure language model labs to physical AI. Ruan cited diminishing marginal returns in LLM research and the appeal of harder problems in embodied AI. Yuanrong's VLA architecture reflects the industry's convergence toward unified base models for autonomous driving, similar to trends seen at Pony.ai and OpenAI. The claimed efficiency gain targets a structural constraint, the capital-intensive data loop in autonomous driving, by shifting development from reactive data collection toward proactive data characterization.
Grounded expert take: Ruan brings deep LLM expertise from DeepSeek to physical AI, which could accelerate Yuanrong's path to L4 autonomy. His focus on a 40B VLA base model with built-in evaluation and analysis components aligns with the pattern of using large models to bootstrap their own improvement, a self-evolving AI approach. The 10x R&D efficiency improvement, if it holds up in practice, could compress autonomous driving development timelines, but open questions remain about deployment safety and scalability.

