Nvidia Unveils Nemotron 3 As Open Source Backbone For Agentic AI

Open Source Takes Centre Stage As Nvidia Launches Nemotron 3

Nvidia has launched Nemotron 3 as an open source, open infrastructure AI platform.

Nvidia has unveiled Nemotron 3, a family of open models designed to power the next generation of agentic AI, positioning the release as open infrastructure rather than a hosted AI service. The company says agentic AI, where models cooperate, reason over long contexts, and execute complex tasks, requires infrastructure that is open, customisable, and enterprise-owned.

As part of the release, Nvidia is open-sourcing Nemotron 3 model weights, most of its training data, and key reinforcement learning tools, including NeMo Gym, NeMo RL, and NeMo Evaluator, all available on GitHub and Hugging Face. The company is also releasing 3 trillion tokens of pre-training, post-training, and RL data, alongside a real-world telemetry dataset for safety evaluation. Open weights can be downloaded and run locally at no cost.

“This is Nvidia’s response to DeepSeek disrupting the AI market. They’re offering a ‘business-ready’ open alternative with enterprise support and hardware optimization,” said Wyatt Mayham of Northwest AI Consulting.

Nemotron 3 is built on a hybrid latent mixture-of-experts architecture and launches with three variants. Nemotron 3 Nano, available now, features 30 billion parameters and a 1-million-token context window, and is optimised for efficiency-focused tasks such as retrieval, summarisation, debugging, and AI assistants. It is available via Hugging Face, major cloud platforms, and as a pre-built Nvidia NIM microservice.
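
Because the Nano weights are published on Hugging Face, a local test run can be as simple as loading them with the transformers library. The sketch below is illustrative only: the repository id shown is an assumption rather than a confirmed model name, and the prompt is arbitrary; substitute whichever id Nvidia publishes.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# NOTE: "nvidia/nemotron-3-nano" is a placeholder repo id (an assumption),
# not a confirmed model name; swap in the published Nemotron 3 Nano id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/nemotron-3-nano"  # assumption: replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the key changes introduced in this release note."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run locally in this way, the weights never leave the organisation's own hardware, which is the ownership model the article describes; the same weights can alternatively be served through the pre-built NIM microservice.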

According to Artificial Analysis, Nano is the most efficient model in its size class. Nvidia claims a 4× throughput improvement over Nemotron 2 Nano and a 60% reduction in reasoning-token generation, lowering inference costs for multi-agent systems.

“Nvidia isn’t trying to compete with OpenAI or Anthropic’s hosted services — they’re positioning themselves as the infrastructure layer for enterprises that want to build and own their own AI agents,” Mayham said.

With Super and Ultra models planned for 2026, Nvidia is betting that openness, ownership, and deployment flexibility, rather than closed APIs, will define enterprise agentic AI.
