
The AMW Read
Sarvam's launch redraws the player map with India's most significant sovereign AI bet to date, while its efficiency claims feed the shifting scaling debate (cross.§B) and deepening regional fragmentation (cross.§E).
Sarvam AI just launched two mixture-of-experts models in India, at 30B and 105B parameters, with the 105B model claiming to outperform DeepSeek R1 (roughly 600B parameters) and Google's Gemini Flash on key benchmarks. If those claims hold, they show that smaller, efficient architectures can compete with massive frontier models, challenging the "scale at all costs" narrative. With $53.8M in funding and a $200M+ valuation, Sarvam represents India's most credible sovereign AI bet yet: models trained from scratch for Indian languages and use cases. The systemic implication is clear: regional AI sovereignty is becoming technically feasible, and the global AI landscape is fragmenting into competitive domestic stacks.


