Llama 3.1 Nemotron Nano 8B v1
Model Overview¶
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) derived from Meta's Llama-3.1-8B-Instruct model. It is a reasoning model, post-trained for reasoning, human chat preferences, and agentic tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-8B-v1 offers a strong tradeoff between accuracy and efficiency: it fits on a single RTX GPU, can be run locally, and supports a context length of 128K tokens.
This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities: a supervised fine-tuning (SFT) stage for math, code, reasoning, and tool calling, followed by multiple reinforcement learning (RL) stages using the REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction following. The final model checkpoint is obtained by merging the final SFT and Online RPO checkpoints. Improved using Qwen.
- Model Architecture: Dense decoder-only Transformer model
- Model Developer: NVIDIA
- Model Release Date: 3/18/2025
- Model Source: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
- License: nvidia-open-model-license
QPC Configurations¶
| Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Generated URL | Download | Generation Date |
|---|---|---|---|---|---|---|---|---|
| MXFP6 | 2 | 16 | 1 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/nvidia/Llama-3.1-Nemotron-Nano-8B-v1/Llama-3.1-Nemotron-Nano-8B-v1_qpc_16cores_128pl_4096cl_1fbs_2devices_mxfp6_mxint8.tar.gz | Download | 21-Jan-2026 |
Run This Model¶
```shell
# Download and extract the QPC
mkdir -p nvidia/Llama-3.1-Nemotron-Nano-8B-v1
cd nvidia/Llama-3.1-Nemotron-Nano-8B-v1
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run the QPC
python3 -m QEfficient.cloud.execute --model_name nvidia/Llama-3.1-Nemotron-Nano-8B-v1 --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128
```
API Endpoint¶
```shell
# Start an OpenAI-compatible REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 8000 \
    --model nvidia/Llama-3.1-Nemotron-Nano-8B-v1 \
    --max-model-len <Context Length> \
    --max-num-seqs <Full Batch Size> \
    --max-seq-len-to-capture <Chunking Prompt Length> \
    --device qaic \
    --block-size 32
```
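Once the server is up, it speaks the standard OpenAI chat-completions protocol. Below is a minimal client sketch using only the Python standard library. It assumes the server is reachable at `localhost:8000` as configured above; the `"detailed thinking on"` / `"detailed thinking off"` system prompt follows the model card's convention for toggling reasoning mode, and the sampling settings are illustrative:

```python
import json
import urllib.request

# Host and port from the vLLM command above
API_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, thinking: bool = True, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    Reasoning mode is toggled via the system prompt
    ("detailed thinking on" / "detailed thinking off").
    """
    return {
        "model": "nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
        "messages": [
            {
                "role": "system",
                "content": "detailed thinking on" if thinking else "detailed thinking off",
            },
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

def chat(prompt: str, thinking: bool = True) -> str:
    """POST the payload to the running endpoint and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt, thinking)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires the vLLM endpoint above to be running
    print(chat("Write a shortest-path algorithm in Python."))
```

The same payload works with any OpenAI-compatible client library by pointing its base URL at the endpoint.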