
The AMW Read
Sarvam AI joins the ranks of frontier model builders with the release of indigenous MoE models, signaling a structural shift toward sovereign AI and reduced dependence on foreign labs in the Indian market.
Bengaluru-based Sarvam AI has unveiled two indigenous large language models at the India AI Impact Summit 2026. Sarvam-30B, with a 32,000-token context window optimized for real-time conversations, and Sarvam-105B, with a 128,000-token context window for complex reasoning, were trained on 16 trillion tokens using a mixture-of-experts architecture. Both models are being open-sourced and deployed across government and enterprise applications, with the Indus chat app already live. The launch marks a pivotal step in India's sovereign AI journey, reducing dependence on foreign models while addressing local linguistic and enterprise needs. It also underscores how emerging AI markets are building indigenous infrastructure to secure data governance, cost efficiency, and population-scale deployment.

