NVIDIA launches open model family for agentic AI

The Nemotron 3 family of open AI models brings a new level of efficiency, accuracy, and transparency to agentic AI development

The Nemotron 3 lineup – comprising Nano, Super, and Ultra – delivers leading performance for multi-agent AI systems, combining advanced reasoning with conversational and collaborative capabilities. The models leverage a hybrid Mamba-Transformer mixture-of-experts (MoE) architecture, providing best-in-class inference throughput while supporting context lengths of up to 1 million tokens.

Nemotron 3 Nano, the smallest model, is optimized for cost-efficient inference and tasks such as software debugging, content summarization, AI assistant workflows, and information retrieval. Although it has 30 billion total parameters, its mixture-of-experts routing activates only about 3 billion per token. Thanks to this hybrid MoE design, Nano achieves up to 4× higher token throughput than its predecessor and reduces reasoning-token generation by 60%, all while maintaining superior accuracy. Early benchmarks show Nano outperforming comparable open models such as GPT-OSS-20B and Qwen3-30B on reasoning and long-context tasks.
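To make the sparse-activation idea concrete, the following is a minimal PyTorch sketch of top-k mixture-of-experts routing, in which each token is processed by only a few of the available experts. The layer sizes, expert count, and routing scheme are illustrative assumptions and do not reflect Nemotron 3's actual architecture.

```python
# Illustrative top-k mixture-of-experts routing in PyTorch.
# All sizes and the routing scheme are toy assumptions, not Nemotron 3's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySparseMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores each token against every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # only the selected experts run for each token
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

moe = ToySparseMoE()
tokens = torch.randn(16, 64)
print(moe(tokens).shape)  # torch.Size([16, 64]); only 2 of 8 experts ran per token
```

Because only the selected experts run for each token, per-token compute scales with the active parameters rather than the total parameter count, which is the property behind the Nano figures above.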

Nemotron 3 Super and Ultra extend these capabilities for high-volume collaborative agents and complex AI applications, incorporating innovations such as latent MoE, a hardware-aware expert design that increases model quality without sacrificing efficiency, and multi-token prediction (MTP), which enhances long-form text generation and multi-step reasoning. Both larger models are trained using NVIDIA’s NVFP4 format, enabling faster training and reduced memory requirements.
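Multi-token prediction trains the model to predict several upcoming tokens from the same position rather than only the next one. The sketch below shows the basic training objective under that interpretation; the head design, number of future tokens, and loss weighting are assumptions for illustration and are not taken from Nemotron 3.

```python
# Minimal sketch of a multi-token prediction (MTP) training loss in PyTorch.
# Head k predicts the token k positions ahead of the current one.
# Sizes, head count, and equal loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model, n_future = 1000, 64, 3
heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_future))

def mtp_loss(hidden, token_ids):
    """hidden: (batch, seq, d_model); token_ids: (batch, seq)."""
    total = 0.0
    for k, head in enumerate(heads, start=1):
        logits = head(hidden[:, :-k])      # predict k steps ahead from each position
        targets = token_ids[:, k:]         # the token k positions later
        total = total + F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    return total / n_future

hidden = torch.randn(2, 16, d_model)
token_ids = torch.randint(0, vocab, (2, 16))
print(mtp_loss(hidden, token_ids))
```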

All Nemotron 3 models are post-trained using multi-environment reinforcement learning (RL), enabling them to handle tasks spanning mathematical and scientific reasoning, competitive coding, instruction following, software engineering, chat, and multi-agent tool use. The models also support granular reasoning budget control at inference time, allowing developers to fine-tune computational resources while maintaining accuracy.
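NVIDIA has not detailed the budget-control interface in this announcement, but the general idea can be approximated at the application layer: count the tokens the model spends inside its reasoning segment and force the segment closed once the budget is exhausted. The sketch below assumes reasoning is delimited by <think> and </think> markers; those markers, the token stream, and the cutoff strategy are all illustrative assumptions.

```python
# Illustrative application-level reasoning budget control.
# Assumes the model emits its reasoning between <think> and </think> markers;
# the markers and the sample stream are illustrative assumptions.
def apply_reasoning_budget(token_stream, budget):
    """Yield tokens, truncating the reasoning segment after `budget` reasoning tokens."""
    in_reasoning = False
    closed_early = False
    used = 0
    for tok in token_stream:
        if tok == "<think>":
            in_reasoning = True
            yield tok
        elif tok == "</think>":
            in_reasoning = False
            if not closed_early:        # skip the real closer if we already closed early
                yield tok
        elif in_reasoning:
            if used < budget:
                used += 1
                yield tok
            elif not closed_early:      # budget exhausted: close the block, drop the rest
                closed_early = True
                yield "</think>"
        else:
            yield tok

stream = ["<think>", "step", "1", "step", "2", "step", "3", "</think>", "answer"]
print(list(apply_reasoning_budget(stream, budget=2)))
# ['<think>', 'step', '1', '</think>', 'answer']
```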

NVIDIA has also released a comprehensive suite of datasets, training libraries, and evaluation tools, including over three trillion tokens of pretraining and reinforcement learning data, the NeMo Gym and NeMo RL open-source libraries, and the Nemotron Agentic Safety Dataset for real-world safety evaluation.
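For developers who want to inspect the released data, the resources are expected to be consumable with standard open-source tooling. The snippet below is a sketch using the Hugging Face datasets library; the repository id is a placeholder rather than a confirmed dataset name.

```python
# Sketch of pulling one of the released datasets with the Hugging Face `datasets` library.
# The repository id below is a placeholder, not a confirmed dataset name.
from datasets import load_dataset

ds = load_dataset("nvidia/nemotron-agentic-safety-dataset", split="train")  # hypothetical repo id
print(ds[0])
```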

The Nemotron 3 family is designed to empower developers, startups, and enterprises to build specialized AI agents transparently and efficiently. Nano is available today through Hugging Face, NVIDIA NIM microservices, and major cloud and AI platforms including AWS, Google Cloud, and Microsoft Foundry. Super and Ultra are expected to launch in the first half of 2026.
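NVIDIA NIM microservices expose an OpenAI-compatible API, so a deployed Nano endpoint can be queried with the standard OpenAI client. In the sketch below, the base URL and model identifier are placeholders to be replaced with the values of the actual deployment.

```python
# Calling a locally deployed Nemotron 3 Nano NIM through its OpenAI-compatible endpoint.
# The base URL and model identifier are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-for-local-nim")
response = client.chat.completions.create(
    model="nvidia/nemotron-3-nano",  # placeholder model id; check the deployed NIM for the real one
    messages=[{"role": "user", "content": "Summarize the open issues in this bug report: ..."}],
)
print(response.choices[0].message.content)
```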

Early adopters such as Accenture, ServiceNow, Perplexity, and Palantir are already integrating Nemotron 3 models into AI workflows for manufacturing, cybersecurity, software development, media, and enterprise operations.

With Nemotron 3, NVIDIA aims to set a new standard for efficient, accurate, and open AI models, enabling developers to scale agentic AI applications from prototype to enterprise deployment while maintaining transparency, cost-efficiency, and state-of-the-art performance.