
The AMW Read
The article updates the DeepSeek case study (01.§4) by demonstrating parity with proprietary SOTA (GPT-5), adding evidence to the open-weight/cost-collapse debate (cross.§B).
Foundation Models · Case Studies · Scaling Laws
DeepSeek's new open-source DeepSeek-V3.2 and DeepSeek-V3.2-Speciale models directly challenge proprietary SOTA, matching or exceeding GPT-5 and Gemini 3 Pro on key reasoning benchmarks. DeepSeek-V3.2-Speciale scored 96.0% on AIME 2025, edging out GPT-5-High's 94.6% and signaling that open-source AI has reached frontier-level parity, particularly in complex mathematical problem-solving. The release democratizes cutting-edge capability and sharply intensifies the competition between open and closed models.


