DeepSeek V4 launches on Huawei Ascend, opens first funding round amid talent exodus
The AMW Read
DeepSeek's acceptance of external funding and switch to Huawei Ascend overturn its §4 case-study narrative of self-funding and Nvidia-only training, while signaling compute-supply decoupling and talent-war dynamics.
DeepSeek has released V4, a mixture-of-experts (MoE) language model with 1.6T total parameters and a 1M-token context window, trained on Huawei Ascend chips rather than Nvidia GPUs after a major training failure in mid-2025. The model remains text-only due to compute and cash constraints. DeepSeek opened its first external fundraise in April 2026, driven by the need to fund larger models and retain talent after key R1 and LLM contributors were poached by ByteDance and Tencent. The company is also building out product teams and exploring agent features.
Why it matters: DeepSeek's shift from Nvidia to Huawei Ascend signals accelerating compute supply chain decoupling, while its first external capital raise breaks a long-standing self-reliance doctrine. This validates the capital-compression arc for Chinese foundation model labs: even the most efficient open-weight pioneer must eventually compete for GPU allocation and top talent through investor capital. The move also reopens the debate on open-source sustainability, as DeepSeek's non-profit ethos yields to commercialization pressure.
Industry watchers should treat DeepSeek's acceptance of outside investment as a structural inflection point. The company had been the canonical case study for the 'fastest ARR ramp without marketing spend' pattern and the 'context-engineering moat' built on sparse attention. It must now navigate the hyperscaler distribution dynamics it previously sidestepped, while rivals such as Alibaba's Qwen and Tencent's Hunyuan race to release larger models. The outcome will determine whether China's AI ecosystem consolidates around a few well-funded labs or fragments under compute constraints.
