NVIDIA has unveiled the Llama Nemotron family of open AI reasoning models, designed to help developers and businesses build advanced AI agents capable of solving complex tasks. These new models offer enhanced reasoning abilities and can function independently or as part of a team, making them a strong foundation for AI-driven platforms.
The Llama Nemotron models are based on the Llama framework but have undergone extensive post-training to improve their ability to handle multistep maths, coding, reasoning, and decision-making tasks. NVIDIA claims these improvements increase accuracy by up to 20% compared with the base models, while delivering inference speeds up to five times faster than other leading open reasoning models. This efficiency allows businesses to run AI reasoning tasks more effectively while reducing operational costs.
Several leading tech companies, including Accenture, Amdocs, Atlassian, Box, Cadence, CrowdStrike, Deloitte, IQVIA, Microsoft, SAP, and ServiceNow, are working with NVIDIA on integrating these new AI reasoning models into their platforms.
“Reasoning and agentic AI adoption is incredible,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s open reasoning models, software and tools give developers and enterprises everywhere the building blocks to create an accelerated agentic AI workforce.”
Enhanced accuracy and deployment options for enterprises
NVIDIA’s Llama Nemotron model family is available in three different sizes—Nano, Super, and Ultra—each optimised for different deployment needs:
- Nano: Offers the highest accuracy for use on PCs and edge devices.
- Super: Balances high accuracy and throughput on a single GPU.
- Ultra: Designed for maximum agentic accuracy in multi-GPU server environments.
To further refine these models, NVIDIA used its DGX Cloud to conduct post-training on curated synthetic data generated by NVIDIA Nemotron and other open models. Additional curated datasets were co-developed with partners to improve the models’ reliability and effectiveness.
The datasets, tools, and optimisation techniques used in post-training will be openly available, allowing enterprises to create their own customised reasoning models tailored to their needs.
AI platforms integrating NVIDIA’s reasoning models
Several industry leaders are incorporating NVIDIA’s Llama Nemotron models to improve their AI platforms:
- Microsoft: Integrating the models and NVIDIA NIM microservices into Microsoft Azure AI Foundry, expanding its AI model catalogue and enhancing services such as Azure AI Agent Service for Microsoft 365.
- SAP: Using the models to improve SAP Business AI solutions and Joule, its AI copilot, while leveraging NVIDIA NIM and NeMo microservices to enhance SAP ABAP programming language models. “We are collaborating with NVIDIA to integrate Llama Nemotron reasoning models into Joule to enhance our AI agents, making them more intuitive, accurate and cost effective,” said Walter Sun, global head of AI at SAP.
- ServiceNow: Developing AI agents with Llama Nemotron models to improve enterprise productivity across various industries.
- Accenture: Deploying the models on its AI Refinery platform, enabling clients to build industry-specific AI agents to drive business transformation.
- Deloitte: Planning to integrate Llama Nemotron models into its Zora AI platform, designed to support AI agents that replicate human decision-making with industry-specific knowledge.
AI tools for advanced reasoning and enterprise adoption
Developers can access NVIDIA’s Llama Nemotron reasoning models through NVIDIA AI Enterprise, which offers a suite of tools for building agentic AI systems. These include:
- NVIDIA AI-Q Blueprint: A framework to connect AI agents with knowledge sources, enabling autonomous reasoning and decision-making.
- NVIDIA AI Data Platform: A reference design for enterprise infrastructure supporting AI query agents.
- NVIDIA NIM Microservices: Designed to optimise inference for AI applications, ensuring continuous learning and real-time adaptation.
- NVIDIA NeMo Microservices: Providing enterprise-grade tools to establish and maintain a data flywheel, allowing AI agents to learn from user interactions.
Availability
The NVIDIA Llama Nemotron Nano and Super models, along with NIM microservices, are available as a hosted API from build.nvidia.com and Hugging Face. Developers in the NVIDIA Developer Program can access these tools for free for research and testing purposes.
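As a rough illustration of how a developer might call one of these hosted models, the sketch below builds a chat-completions request payload. The endpoint on build.nvidia.com is OpenAI-compatible, but the base URL, model identifier, and the system-prompt convention for toggling detailed reasoning are assumptions here; check the model card on build.nvidia.com for the exact values.

```python
# Hypothetical sketch: preparing a request for a hosted Llama Nemotron model
# via an OpenAI-compatible chat-completions endpoint. BASE_URL, MODEL, and
# the "detailed thinking" system prompt are assumptions, not confirmed values.
import json

BASE_URL = "https://integrate.api.nvidia.com/v1"  # assumed endpoint
MODEL = "nvidia/llama-3.3-nemotron-super-49b-v1"  # assumed model id

def build_request(prompt: str, reasoning: bool = True) -> dict:
    """Build a chat-completions payload. A system message toggles the
    model's step-by-step reasoning mode (assumed convention)."""
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.6,
        "max_tokens": 1024,
    }

payload = build_request("What is 12 * 34? Show your steps.")
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed to `BASE_URL + "/chat/completions"` with an API key from the NVIDIA Developer Program in the `Authorization` header; the sketch stops at payload construction so it runs without credentials.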
For production deployment, enterprises can use the models with NVIDIA AI Enterprise on cloud and data centre infrastructure. NVIDIA NeMo microservices will be made publicly available at a later date.
Additionally, the NVIDIA AI-Q Blueprint is expected to launch in April, while the NVIDIA AgentIQ toolkit is already available on GitHub.