
The AMW Read
The release of a very large sparse MoE model on a lean budget advances the open-weight/cost-collapse pattern (cross.§B) and positions the model as a Western alternative to Chinese models (cross.§E).
Foundation Models · Player Map · Scaling Laws · Geopolitics
Arcee has launched Trinity Large Thinking, a 400B-parameter sparse MoE LLM with only 13B parameters active per token, built by a 26-person team on a $20M budget. The model is Apache 2.0 licensed, runs 2–3× faster than dense rivals, and costs just $0.90 per million output tokens. Its on-premise availability gives U.S. and Western enterprises a high-performance alternative to Chinese-hosted models, reducing supply-chain and data-sovereignty risks. 📈🤖
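For readers unfamiliar with the sparse-MoE framing, the toy sketch below illustrates what "only 13B of 400B parameters active per token" means: a learned router picks a small top-k subset of expert feed-forward blocks for each token, so most of the model's weights sit idle on any given forward pass. Trinity's actual expert count, sizes, and routing scheme are not detailed in this brief; every dimension and the top-k value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes only -- not Trinity's real configuration.
d_model, n_experts, top_k = 64, 16, 2

# Each "expert" is a small feed-forward block; only top_k of them run per token,
# so the active parameter count per token is a small fraction of the total.
experts = [
    (rng.standard_normal((d_model, 4 * d_model)) * 0.02,
     rng.standard_normal((4 * d_model, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02


def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top_k experts and mix their outputs."""
    logits = x @ router                                   # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]         # chosen expert ids per token
    weights = np.take_along_axis(logits, top, axis=-1)
    weights = np.exp(weights - weights.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the chosen experts

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                           # per token, run only top_k experts
        for slot in range(top_k):
            w_in, w_out = experts[top[t, slot]]
            hidden = np.maximum(x[t] @ w_in, 0.0)         # ReLU feed-forward
            out[t] += weights[t, slot] * (hidden @ w_out)
    return out


tokens = rng.standard_normal((8, d_model))
print(moe_layer(tokens).shape)   # (8, 64): full-width output, but only 2 of 16 experts ran per token
```

In this toy setup each token touches 2 of 16 experts, i.e. roughly an eighth of the expert weights; the same routing idea is how a 400B-parameter MoE can keep per-token compute closer to that of a ~13B dense model, which is what underlies the speed and cost claims above.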
