
The AMW Read
Sarvam AI's launch of large-scale MoE models updates the player map for frontier-adjacent labs and validates the 'Sovereign AI' structural force, driven by state-backed compute subsidies and regional-language specialization.
Sarvam AI launched two indigenous large language models, at 30B and 105B parameters, both built on a mixture-of-experts (MoE) architecture that sharply reduces inference cost while maintaining competitive performance. The 105B model activates only 9B parameters per token, yet outperforms DeepSeek's 671B-parameter R1 on several benchmarks and surpasses Google's Gemini Flash on Indian-language tasks at lower cost. This is India's most significant step yet toward sovereign AI infrastructure, backed by Rs 99 crore in government GPU subsidies and 4,096 NVIDIA H100 GPUs. The open-source models support 22 Indian languages and offer context windows up to 128K tokens, positioning India to serve AI to 1.4 billion people without depending on foreign systems.
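
To see how a 105B-parameter model can activate only ~9B parameters per token, here is a minimal sketch of top-k gated MoE routing, the standard mechanism behind such numbers. Sarvam has not published its routing details, so the expert count, k, and layer sizes below are toy values for illustration only, not the model's real configuration.

```python
import numpy as np

# Illustrative top-k mixture-of-experts routing. All sizes are hypothetical
# toy values; Sarvam's actual expert count and gating scheme are unpublished.
rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2   # toy config, not Sarvam's real one

# One tiny feed-forward "expert" per slot: d_model -> d_model.
experts = [rng.standard_normal((d_model, d_model)) * 0.02
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02  # gating network

def moe_forward(x):
    """Route a token vector x to its top_k experts and mix their outputs."""
    logits = x @ router                    # score every expert for this token
    chosen = np.argsort(logits)[-top_k:]   # keep only the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()               # softmax over the chosen k
    # Only the chosen experts' weights are touched per token; this sparsity
    # is what lets total parameters far exceed per-token active parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_forward(token)

active = top_k * d_model * d_model
total = n_experts * d_model * d_model
print(f"active expert params per token: {active} of {total} "
      f"({active / total:.0%})")  # 2 of 16 experts -> 12.5% active
```

Scaled up, the same ratio logic is how a model holds 105B parameters in memory while each token's forward pass costs closer to a 9B-parameter dense model, which is the source of the inference-cost advantage the launch emphasizes.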
#SovereignAI #IndiaAI #LLM #ArtificialIntelligence #GenAI #SarvamAI #DeepSeek #OpenSourceAI

