Llama 3.1 Nemotron Nano 8B v1

Model Overview

Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) derived from Meta's Llama-3.1-8B-Instruct. It is a reasoning model, post-trained for reasoning, human chat preferences, and agentic tasks such as RAG and tool calling.

Llama-3.1-Nemotron-Nano-8B-v1 offers a strong tradeoff between accuracy and efficiency: it fits on a single RTX GPU, can be run locally, and supports a context length of 128K tokens.

This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning (SFT) stage for math, code, reasoning, and tool calling, as well as multiple reinforcement learning (RL) stages using the REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction following. The final model checkpoint is obtained by merging the final SFT and Online RPO checkpoints. Improved using Qwen.
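Per the upstream NVIDIA model card, reasoning mode is toggled through the system prompt ("detailed thinking on" / "detailed thinking off"). The helper below is an illustrative sketch (the function name `build_messages` is our own) showing how a chat request for this model might be assembled before passing it to a chat template or an OpenAI-compatible endpoint:

```python
def build_messages(user_prompt: str, reasoning: bool = True) -> list:
    """Build an OpenAI-style message list for Llama-3.1-Nemotron-Nano-8B-v1.

    The system prompt toggles the model's reasoning mode, following the
    convention documented in the upstream NVIDIA model card.
    """
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# Example: a request with reasoning enabled.
messages = build_messages("Solve: what is 17 * 24?", reasoning=True)
```

The resulting `messages` list can then be fed to `tokenizer.apply_chat_template(...)` from Hugging Face `transformers`, or sent as-is to any OpenAI-compatible serving endpoint.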

  • Model Architecture: Dense decoder-only Transformer model
  • Model Developer: NVIDIA
  • Model Release Date: 3/18/2025
  • Model Source: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
  • License: nvidia-open-model-license

QPC Configurations

  • Precision: MXFP6
  • SoCs / Tensor slicing: 2
  • NSP cores (per SoC): 16
  • Full batch size: 1
  • Chunking prompt length: 128
  • Context length (CL): 4096
  • QPC size: 9.6 GB
  • QPC download: https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/nvidia/Llama-3.1-Nemotron-Nano-8B-v1/nvidia_Llama-3.1-Nemotron-Nano-8B-v1_qpc_16cores_128pl_4096cl_1fbs_2devices_mxfp6_mxint8.tar.gz
  • ONNX download: https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/nvidia/Llama-3.1-Nemotron-Nano-8B-v1/nvidia_Llama-3.1-Nemotron-Nano-8B-v1_ONNX.tar.gz
  • Generation date: 18-Mar-2026
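The QPC tarball's filename encodes the configuration listed above (16 NSP cores, 128 prompt length, 4096 context length, batch size 1, 2 devices, MXFP6 weights with MXINT8). A minimal sketch of fetching it, with the URL taken verbatim from the table; the actual download (~9.6 GB) is left commented out so the snippet is safe to run as-is:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# URL copied from the QPC configuration table above.
QPC_URL = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/nvidia/"
    "Llama-3.1-Nemotron-Nano-8B-v1/"
    "nvidia_Llama-3.1-Nemotron-Nano-8B-v1_qpc_16cores_128pl_4096cl_"
    "1fbs_2devices_mxfp6_mxint8.tar.gz"
)

# The local filename the download produces; its parts mirror the table
# (16cores, 128pl, 4096cl, 1fbs, 2devices, mxfp6, mxint8).
tarball = PurePosixPath(urlparse(QPC_URL).path).name
print(tarball)

# To actually download and unpack (uncomment; needs ample free disk space):
# import urllib.request, tarfile
# urllib.request.urlretrieve(QPC_URL, tarball)
# with tarfile.open(tarball, "r:gz") as tf:
#     tf.extractall("qpc")
```

The same pattern applies to the ONNX tarball URL, which unpacks to the exported ONNX model rather than a compiled program container.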