
Product
The AMW Read
A top-tier, open-weight MoE model outperforming frontier closed models on coding benchmarks lends real weight to the 'CN/OSS challenger' thesis and signals a major shift in where the capability frontier sits.
Foundation Models · Player Map · Scaling Laws
Z.ai released GLM-5.1, a 754B‑parameter Mixture‑of‑Experts model under an MIT license. It tops SWE‑Bench Pro with a score of 58.4, surpassing GPT‑5.4, Claude Opus 4.6, and Gemini 3.1 Pro. The model supports a 200K-token context window and up to 128K output tokens, and sustains autonomous coding runs of up to 8 hours, offering a cost‑effective alternative for enterprise AI agents. #OpenSource #AI #LLM #Coding #AgenticAI
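
If GLM-5.1 is served behind an OpenAI-compatible endpoint, as is common for open-weight models, wiring it into an existing agent stack could look like the minimal sketch below. The base URL and model identifier are assumptions for illustration, not values from this release; check Z.ai's documentation for the real ones.

```python
# Minimal sketch: calling GLM-5.1 via an OpenAI-compatible chat API.
# base_url and model name are hypothetical placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/v1",  # hypothetical endpoint; see Z.ai docs
    api_key="YOUR_ZAI_API_KEY",      # placeholder credential
)

response = client.chat.completions.create(
    model="glm-5.1",                 # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Refactor this module to remove the N+1 query."},
    ],
    max_tokens=128_000,              # matches the reported 128K output-token limit
)
print(response.choices[0].message.content)
```

Because the interface is the standard chat-completions shape, swapping GLM-5.1 in as a drop-in backend for an existing agent framework is largely a matter of changing the endpoint and model name.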

