DeepSeek releases new AI model V4 with drastically reduced costs

The AMW Read

Novelty 2: updates a known player with a significant capability advance, but fits an existing pattern. Significance 3: cross-segment impact on foundation-model economics and geopolitical compute dynamics.

Foundation Models · Case Studies · Geopolitics · Compute Economics
DeepSeek AI

Foundation Models / LLMs


Chinese AI startup DeepSeek released DeepSeek-V4, available in two variants: V4-Pro (1.6 trillion parameters) and V4-Flash (284 billion parameters). The model supports a context length of one million tokens, on par with Google's Gemini, at drastically reduced compute and memory costs. It is optimized for popular AI agent products such as Claude Code and can run on Huawei Ascend SuperPoD chips. The company also announced an open-source preview version of the model.

Why it matters: V4's arrival marks an inflection point in the cost-performance curve for foundation models. By combining an ultra-long context (matching top US labs) with hardware optimization that lets it run on sanctioned Chinese chips, DeepSeek challenges the hyperscaler-distribution moat that US labs have relied on. This update to the canonical DeepSeek case study shows the capital-compression arc moving into long-context reasoning, potentially accelerating commoditization of a key capability. The open-source release also deepens a recurring pattern: Chinese firms using open-weight strategies to drive adoption despite geopolitical headwinds.

Expert take: iiMedia founder Zhang Yi called the release a genuine inflection point, stating that ultra-long context support "is expected to move beyond high-end research labs and enter mainstream commercial applications." Analyst Max Liu noted that if V4 matches Western labs' frontier models, the shock value would equal DeepSeek's original "Sputnik moment." The model's optimization for Huawei chips also signals deepening integration between Chinese AI and domestic hardware, sidestepping US export controls.

#DeepSeek #V4 #long-context #cost-reduction #Huawei-Ascend #open-source

How This Connects

Based on Foundation Models · Case Studies

  1. 1d ago · Anthropic secures up to $40B from Google, $25B from Amazon in capital blitz. (Anthropic)
  2. 2d ago · DeepSeek releases new AI model V4 with drastically reduced costs · THIS ARTICLE
  3. 2d ago · OpenAI releases GPT-5.5, topping all benchmarks and surpassing Opus 4.7. (OpenAI)
  4. 2d ago · DeepSeek unveils V4 model using Huawei chips, undercuts US labs on price. (DeepSeek)
  5. 3d ago · DeepSeek V4 launches on Huawei Ascend, opens first funding round amid talent exodus. (DeepSeek)
  6. 2w ago · Mistral AI met Samsung's device‑solutions head in Seoul, seeking high‑bandwidth memory (HBM) for its... (Mistral AI)
