Z.ai has open sourced GLM-4.7, the latest version of its large language model built for real-world software development, positioning open-source LLMs as viable, production-ready infrastructure rather than experimental or demo systems.
Designed around practical engineering workflows, GLM-4.7 prioritises long-running task execution, stable and reliable tool calling, and multi-step agentic reasoning, capabilities increasingly required in complex, agent-driven development environments.
Compared with its predecessor GLM-4.6, the new release delivers stronger code generation, improved complex reasoning, and higher agent execution stability. Z.ai also reports more consistent and controllable behaviour over extended tasks, alongside cleaner and more concise language output, addressing a common limitation seen in many open-source models.
To assess real-world performance, Z.ai evaluated GLM-4.7 across 100 practical programming tasks in production-like environments such as Claude Code, spanning front-end, back-end, and command-execution scenarios. The model achieved higher task completion rates and greater operational stability than GLM-4.6, leading to its adoption as the default model for Z.ai’s GLM Coding Plan.
Benchmark results place GLM-4.7 among the strongest open-source models available. It scored 67.5 on BrowseComp and 87.4 on τ²-Bench, marking a new high for open-source systems. In coding benchmarks such as SWE-bench Verified and LiveCodeBench v6, performance approaches Claude Sonnet 4.5, while Code Arena’s blind evaluation ranked it first among open-source models.
With weights released on Hugging Face, availability via the BigModel.cn API, and integration into Z.ai’s full-stack development platform, GLM-4.7 underscores how open-source AI is moving decisively from research benchmarks to deployable, production-grade infrastructure.
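As a rough illustration of what API access might look like, the sketch below builds a single-turn chat-completion request. This is a hypothetical example: the endpoint path follows the OpenAI-compatible conventions BigModel.cn already uses, and the `"glm-4.7"` model identifier is an assumption, not a detail confirmed by the announcement.

```python
# Hypothetical sketch of calling GLM-4.7 via an OpenAI-compatible
# chat-completions endpoint. The URL and model name are assumptions.
API_URL = "https://open.bigmodel.cn/api/paas/v4/chat/completions"

def build_request(prompt: str, model: str = "glm-4.7") -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more deterministic code output
    }

# Actually sending the request would look roughly like this
# (requires an API key; not executed here):
#   import requests
#   resp = requests.post(
#       API_URL,
#       json=build_request("Write a FizzBuzz function in Go"),
#       headers={"Authorization": "Bearer <YOUR_API_KEY>"},
#   )
#   print(resp.json()["choices"][0]["message"]["content"])
```

Because the body is plain JSON, the same payload shape would also work with most OpenAI-compatible client libraries by pointing their base URL at the provider's endpoint.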














































































