
The AMW Read
The article validates the 'Sovereign AI' pattern (Frame 2 of Segment 01) by demonstrating a state-supported entrant using localized data and subsidized compute to challenge Big Tech dominance.
Sarvam AI launched two indigenous LLMs, a 30B and a 105B parameter model, with the larger reportedly outperforming DeepSeek R1 (600B parameters) and Google's Gemini Flash on key benchmarks while excelling across 22 Indian languages. Thanks to its mixture-of-experts architecture, the 105B model activates only 9B parameters per token, a cost efficiency critical for serving India's 1.4 billion people affordably. As the largest beneficiary of India's Rs 10,000 crore AI Mission, with Rs 99 crore in GPU subsidies and access to 4,096 NVIDIA H100s, Sarvam validates the sovereign AI strategy emerging nations are pursuing to reduce dependence on Big Tech. It also signals that efficient, localized models, not just massive frontier systems, will define the next wave of global AI competition.
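The efficiency claim above comes down to simple arithmetic: in a mixture-of-experts model, per-token inference compute scales roughly with the *active* parameters, not the total. A minimal sketch using the figures reported in the article (105B total, ~9B active per token; the rough FLOPs-scaling assumption is ours, not the article's):

```python
# Figures from the article; the "compute scales with active params"
# approximation is a common MoE rule of thumb, not an exact cost model.
TOTAL_PARAMS = 105e9    # Sarvam's larger model: 105B total parameters
ACTIVE_PARAMS = 9e9     # ~9B parameters activated per token (MoE routing)

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active fraction per token: {active_fraction:.1%}")   # ~8.6%

# Relative to a hypothetical dense 105B model, which would run all
# 105B parameters on every token:
savings = 1 - active_fraction
print(f"Approx. per-token compute saved vs dense 105B: {savings:.0%}")  # ~91%
```

Under this back-of-envelope view, the model serves with the per-token cost profile of a ~9B dense model while retaining 105B parameters of capacity, which is the crux of the affordability argument for a 1.4-billion-person market.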


