Turiyam
Category: AI Chips / Semiconductors
A full-stack AI compute infrastructure platform that accelerates AI inference workloads at radically low total cost of ownership (TCO) through custom hardware and integrated software. Founded in 2024 and led by Sanchayan Sinha, Turiyam is based in Bengaluru, India, with a team of 10-50. The company has raised $4.0M in total funding; its latest round was a $4.0M pre-seed in March 2026. Key investors include Ankur Capital and Axilor's Micelio Fund.
- Founded: 2024
- Headquarters: Bengaluru, India
- Team size: 10-50
- Total funding: $4.0M
Value proposition
Delivers disruptive TCO for AI inference through a software-first architecture combining a hybrid SRAM+HBM memory design, compiler-led optimization using reinforcement learning, and open CUDA-free middleware on RISC-V hardware, eliminating vendor lock-in and HBM supply chain constraints.
Products and solutions
["Custom AI Inference Accelerator Chips (Hardware)","Hybrid Memory Architecture (SRAM + HBM)","RL-Based Compiler Stack for Workload Optimization","CUDA-Free Middleware Platform","Full-Stack Hardware-Software Integration"]
Unique value
India's first data center inference semiconductor company pioneering a software-first approach with hybrid memory architecture (SRAM + HBM) to bypass HBM supply constraints, combined with reinforcement learning-based compiler optimization that continuously maps workloads to hardware for maximum throughput and efficiency. Open platform design using CUDA-free middleware and RISC-V ISA avoids NVIDIA ecosystem lock-in.
Target customer
Data centers, cloud service providers, and enterprises running inference-heavy generative AI workloads requiring high-performance, cost-efficient compute infrastructure
Industries served
["AI Infrastructure & Semiconductors","Data Centers & Cloud Computing","Enterprise AI/ML","Generative AI & LLM Deployment","Deep Tech / Hardware Acceleration"]
Technology advantage
Addresses the $100B-$300B inference market by 2030 through three differentiators: (1) Hybrid SRAM+HBM memory balances cost-efficiency vs. bandwidth, avoiding HBM shortages; (2) RL-driven compiler stack dynamically optimizes inference performance-per-watt; (3) Open standards (RISC-V, CUDA-free) reduce TCO and enable enterprise customization. Team has collectively built 30+ chips with experience at Groq, Lightmatter, AMD, and other leading AI hardware companies.
How they differentiate
Combines a software-first architecture with a hybrid SRAM+HBM memory design (sidestepping HBM supply constraints), reinforcement learning-based compiler optimization that continuously maps workloads to hardware, and open CUDA-free middleware on RISC-V, delivering disruptively low total cost of ownership for AI inference without vendor lock-in.
Main competitors
["NVIDIA (with Groq LPU technology)","Cerebras Systems","Lightmatter"]
Key partnerships
["Ankur Capital (Lead Investor - first semiconductor investment for the deep science VC firm)","Axilor's Micelio Fund (Co-Investor - early-stage deep tech fund)","Early enterprise pilot customers (undisclosed data centers and enterprises in India and overseas)"]
Notable customers
["Select enterprise pilot customers (undisclosed data centers and enterprises in India and overseas)"]
Major milestones
["Raised $4M pre-seed funding from Ankur Capital and Axilor's Micelio Fund (Mar 2026)","Ankur Capital's first semiconductor investment","Founding team collectively built more than 30 chips with experience at Groq, Lightmatter, AMD","Initiated pilot deployments with select enterprises in India and overseas","Positioned as India's first data center inference semiconductor company"]
Growth metrics
Products are in pilot deployment with select enterprises across India and overseas; targeting data centers and enterprises running inference-heavy generative AI workloads.
Market positioning
Early-stage deep tech startup positioned as a cost-efficient alternative to incumbent GPU solutions, targeting data centers, cloud service providers, and enterprises running inference-heavy generative AI workloads in an inference market projected to reach $100B-$300B by 2030.
Geographic focus
India (Bengaluru headquarters), with pilot deployments across select enterprises and data centers in India and overseas markets; global expansion strategy 'From India, For the World'
Patents and IP
No registered patents publicly disclosed as of March 2026 (company founded 2024, currently in early development stage); likely pursuing trade secret protection for proprietary compiler optimization algorithms and hardware architecture designs
About Sanchayan Sinha
15+ years in the semiconductor domain. Previously designed chips at large design houses and worked at the cutting edge of AI inference at Groq, with further experience at Lightmatter (photonic interconnects and co-packaged optics for AI infrastructure), AMD (chip design and architecture), and Essenvia. Has built multiple chips over the course of his career. IIT Kharagpur graduate.
Official website: https://turiyam.ai