Z.AI unveils MIT-licensed, open source GLM5 on Hugging Face, delivering near-proprietary performance for autonomous coding and complex engineering tasks, as closed-model vendors scramble to iterate.
Open source artificial intelligence has gained a heavyweight contender. Z.AI has launched GLM5, a 744-billion-parameter model released under the permissive MIT licence and hosted on Hugging Face, positioning community-led development directly against proprietary large language models.
Designed for autonomous coding and agentic engineering, GLM5 targets real-world software workflows rather than general chat. The system supports debugging, refactoring, repository navigation, and complex workflow automation, aided by 40 billion parameters active per token, training on 28.5 trillion tokens, and a 200,000-token context window that enables sustained reasoning across large codebases.
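
For teams that want to experiment with the release, weights published on Hugging Face can in principle be loaded through the standard transformers workflow. The sketch below is a minimal illustration, not an official recipe: the repository ID "zai-org/GLM-5" is an assumption based on Z.AI's existing namespace, and the exact name, chat template, and recommended settings should be taken from the official model card.

```python
# Minimal sketch, assuming a transformers-compatible checkpoint.
# The repo ID "zai-org/GLM-5" is hypothetical; check the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-5"  # assumption, not a confirmed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard across available GPUs
    trust_remote_code=True,
)

# A coding-oriented prompt, formatted with the model's chat template.
messages = [
    {"role": "user", "content": "Refactor this function to remove the duplicated loop: ..."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In practice, a checkpoint of this scale would normally be quantised or served through a dedicated inference engine such as vLLM or SGLang rather than loaded directly as shown here; the snippet is only meant to indicate that the MIT licence and Hugging Face hosting make such experimentation possible.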
Benchmark results suggest the open source approach is closing the performance gap. GLM5 scored 77.8% on SWE-bench and 56.2% on Terminal-Bench 2.0, leading open source peers. It ranked highest on Vending-Bench 2, which measures long-term operational capability, and achieved 50.4% on Humanity’s Last Exam, reportedly competing with or surpassing proprietary systems from OpenAI and Anthropic, as well as Gemini 3 Pro–class models.
The release signals a broader shift towards accessible, lower-cost, community-built AI that rivals closed alternatives in capability and scale.
Meanwhile, a possible Gemini 3.1 Pro update has surfaced in third-party benchmarks, though Google has not confirmed specifications. The unverified sighting points to incremental tuning rather than a major leap.
Together, these developments underline a tightening race where open source is no longer catching up but competing head-on.