Tencent’s Hunyuan Image 3.0, the largest open-source image-generation model to date at 80 billion parameters, has overtaken Google’s Nano Banana on a leading benchmark, a sign that open-source AI can match flagship closed-source models.
Tencent Holdings’ AI model Hunyuan Image 3.0 has claimed the top spot in the text-to-image category on LMArena, a major AI model evaluation platform originally started by researchers at the University of California, Berkeley. Its rise to the leading position displaces Google DeepMind’s Gemini 2.5 Flash Image (Nano Banana), previously known for its image-editing accuracy and 3D-figurine generation.
Hunyuan Image 3.0 is fully open source and, with 80 billion parameters, the largest open-source image-generation model released to date. Tencent claims the model is “completely comparable to the industry’s flagship closed-source models”, underlining the competitiveness of open-source AI.
Parameters are the numerical values that encode a model’s learned knowledge and are adjusted during training. A higher parameter count generally indicates a more capable model, although larger models require significantly more computational resources to train and run.
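To make the term concrete, the short Python sketch below counts the trainable parameters of a toy two-layer network. It is a generic illustration using PyTorch, not Hunyuan’s actual architecture, and the layer sizes are arbitrary; flagship models such as Hunyuan Image 3.0 scale this same idea up to tens of billions of values.

```python
# Illustrative only: count the trainable parameters of a small neural network.
# This is not Hunyuan Image 3.0; the layer sizes here are arbitrary.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),  # weights: 1024 x 4096, plus 4096 bias terms
    nn.ReLU(),
    nn.Linear(4096, 1024),  # weights: 4096 x 1024, plus 1024 bias terms
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} trainable parameters")  # roughly 8.4 million for this toy model
```

Each of those values is tuned during training; an 80-billion-parameter model simply has vastly more of them, which is why serving such a model demands far more memory and compute.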
Visual demonstrations of Hunyuan Image 3.0’s capabilities include intricate images, such as a Star Ferry-inspired spacecraft traversing a wormhole, which Tencent has showcased to highlight the model’s text-to-image generation abilities and which it presents as evidence that the model can outperform Nano Banana in practice.
Tencent’s achievement shows that open-source AI can now rival closed-source flagship models, setting a new benchmark in scale and performance. The success of Hunyuan Image 3.0 highlights the growing impact of open-source innovation, offering developers and researchers a powerful, publicly accessible tool while challenging the traditional dominance of proprietary AI technologies.














































































