
The AMW Read
The launch of efficient MoE models by Sarvam AI validates the open-weight/cost-collapse strategy (cross.§B) and advances India's sovereign AI capability (cross.§E) against global frontier labs.
Sarvam AI has launched two open-source mixture-of-experts models at the India AI Impact Summit 2026: a 30B-parameter model that activates only 1B parameters per token with a 32K context window, and a 105B-parameter model that activates 9B parameters per token with a 128K context window, trained on 16 trillion tokens. The 105B model reportedly outperforms DeepSeek R1 (671B total parameters) and Google's Gemini Flash on key benchmarks while being roughly 6x smaller, demonstrating that efficient architecture design can rival much larger models. This marks a significant milestone in India's sovereign AI capabilities and signals a shift toward regionally optimized, cost-efficient AI infrastructure that challenges the dominance of global tech giants. The open-source release could accelerate enterprise and government AI adoption across India's diverse linguistic landscape.
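
The "activates only a fraction of parameters per token" claim is the core of the mixture-of-experts design: all experts are held in memory, but a router runs only a few of them for each token, so per-token compute scales with active rather than total parameters. The sketch below is a toy illustration of top-k MoE routing with made-up dimensions (d_model, d_ff, n_experts, top_k are illustrative assumptions, not Sarvam's published architecture):

```python
# Toy top-k mixture-of-experts layer: total parameters vs. active parameters per token.
# All sizes are illustrative assumptions, not Sarvam AI's actual configuration.
import numpy as np

rng = np.random.default_rng(0)

d_model = 64       # hidden size (toy value)
d_ff = 256         # per-expert feed-forward width (toy value)
n_experts = 32     # experts held in memory (count toward total parameters)
top_k = 2          # experts actually executed per token (count toward active parameters)

# Expert weights and router projection.
w_in = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
w_out = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """x: (d_model,) single token. Only top_k of n_experts are run."""
    logits = x @ router                        # (n_experts,) routing scores
    chosen = np.argsort(logits)[-top_k:]       # indices of the top_k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                   # softmax over the chosen experts
    out = np.zeros(d_model)
    for w, e in zip(weights, chosen):
        hidden = np.maximum(x @ w_in[e], 0.0)  # ReLU expert MLP for simplicity
        out += w * (hidden @ w_out[e])
    return out

total_params = w_in.size + w_out.size + router.size
active_params = top_k * (d_model * d_ff + d_ff * d_model) + router.size
print(f"total: {total_params:,}  active per token: {active_params:,} "
      f"(~{active_params / total_params:.0%})")
print("output shape:", moe_forward(rng.standard_normal(d_model)).shape)
```

Scaling the same ratio up is what the reported figures describe: roughly 1B of 30B parameters (about 3%) and 9B of 105B (under 9%) active per token, which is why inference cost tracks the smaller active count rather than the headline model size.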

