IBM unveils Granite 4 Nano, a family of ultra-small, open source generative AI models.
IBM Corp. has released Granite 4 Nano, a family of compact open source generative AI models, under the permissive Apache 2.0 licence, marking a major step toward decentralised and privacy-first AI. The models are designed to operate on-device, at the edge, or within browsers, enabling developers to deploy efficient and secure AI applications without cloud dependence.
The open source release allows unrestricted commercial use, modification, and redistribution, strengthening community-led innovation. By democratising access to models in the sub-2-billion-parameter range, IBM positions itself within the open LLM ecosystem alongside contributors such as Alibaba’s Qwen and Google’s Gemma, advancing accessible AI for diverse environments.
The Granite 4 Nano family includes four instruct models and their base counterparts, ranging from 350 million to 1.5 billion parameters. Models such as Granite 4.0 H 1B and Granite 4.0 H 350M feature a hybrid Mamba-transformer architecture, combining Mamba’s hardware efficiency with transformer-based contextual understanding for strong performance on constrained devices.
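For developers who want to experiment locally, the sketch below shows how one of these instruct models might be loaded and queried with Hugging Face Transformers. The model identifier and prompt are assumptions for illustration; the exact names are listed in IBM’s Granite collection on Hugging Face, and the hybrid (H) variants require a recent transformers release with support for their Mamba-based layers.

```python
# Minimal sketch, assuming a Granite 4.0 Nano instruct model is published
# under an identifier like the one below (verify the exact name in IBM's
# Granite collection on Hugging Face).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-350m"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

# Build a chat-formatted prompt and generate a short reply entirely on-device.
messages = [{"role": "user", "content": "Summarise the benefits of on-device AI."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```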
Performance benchmarks show Granite 4.0 H 1B leading in instruction-following and function-calling tasks, achieving 78.5 on IFEval and 54.8 on the Berkeley Function Calling Leaderboard v3 (BFCLv3), outperforming Qwen3 1.7B and Gemma 3 1B.
IBM’s open source strategy reinforces its broader Granite model family, which scales from Nano variants to enterprise-grade systems. This release strengthens IBM’s AI portfolio with energy-efficient, edge-ready, and privacy-respecting models, reflecting a decisive move toward decentralised AI computing.