
DeepSeek Seeks First External Funding at Over $10 Billion Valuation Amidst Core Talent Drain
The AMW Read
DeepSeek's pivot from an insular funding model to external capital at a $10B+ valuation updates its case-study profile, while the explicit 'talent war' and core R&D departures signal structural instability in the CN frontier lab layer.
DeepSeek, a major Chinese AI model company, is in talks with investors over its first external equity financing round. The company aims to raise no less than $300 million at a valuation of at least $10 billion. It had previously declined investment proposals from leading Chinese venture capital firms and tech giants, relying instead on backing from its affiliated quantitative trading firm, High-Flyer Quant. The move coincides with the reported departure of at least five core R&D members in the second half of 2025, spanning key areas including base models, reasoning, OCR, and multimodal technology.
This development is a significant marker of the mounting pressure on leading AI labs, particularly in China, to secure vast capital for model development while managing intense competition for elite talent. The reported $10B+ valuation signals strong investor confidence in DeepSeek's technical standing, but the concurrent brain drain to giants like Tencent, ByteDance, Xiaomi, and autonomous driving company Yuanrong Qixing exposes the sustainability challenges of its previous insular funding strategy. The specific loss of contributors to key projects like the R1 reasoning model and the V3 model directly impacts core capabilities.
The funding round is a necessary strategic pivot. DeepSeek's reportedly mid-tier industry salaries were insufficient to retain researchers being offered 2-3x premiums elsewhere, making external capital essential both for competitive compensation and for the expensive development of its anticipated next-generation V4 model. While the addition of a 'Fast Mode' and 'Expert Mode' to its platform shows continued product iteration, the success of the more parameter-intensive V4 model, potentially built on a Mega MoE architecture, is contingent on stabilizing the research team. The situation reflects a broader AI market trend: technical prowess must be matched with robust financial and human-resource strategies to survive the scaling race.

