Llama 3.1 Nemotron Nano 8B v1
Model Overview
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) derived from Meta's Llama-3.1-8B-Instruct. It is a reasoning model that has been post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-8B-v1 offers a good tradeoff between accuracy and efficiency: the model fits on a single RTX GPU, can be run locally, and supports a context length of 128K tokens.
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning (SFT) stage for math, code, reasoning, and tool calling, as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction following. The final model checkpoint is obtained by merging the final SFT and Online RPO checkpoints. The model was also improved using Qwen.
- Model Architecture: Dense decoder-only Transformer model
- Model Developer: NVIDIA
- Model Release Date: 3/18/2025
- Model Source: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
- License: nvidia-open-model-license
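
As a minimal, hedged sketch, the checkpoint can be loaded from the Hugging Face repository listed under "Model Source" above using the `transformers` library. The prompt, generation parameters, and the system-prompt toggle for reasoning mode ("detailed thinking on"/"detailed thinking off") are illustrative assumptions based on the upstream model card, not part of this page.

```python
# Minimal sketch: load Llama-3.1-Nemotron-Nano-8B-v1 with Hugging Face transformers.
# Assumes transformers and torch are installed and a GPU with enough memory is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-Nano-8B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

# Assumption: reasoning mode is toggled via the system prompt
# ("detailed thinking on" / "detailed thinking off"), per the upstream model card.
messages = [
    {"role": "system", "content": "detailed thinking on"},
    {"role": "user", "content": "Explain the difference between RAG and tool calling."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```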
QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Generated URL | Download |
|---|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/nvidia/Llama-3.1-Nemotron-Nano-8B-v1/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz | Download |
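
As a hedged sketch, the precompiled QPC listed in the table can be fetched and unpacked with the Python standard library. The URL and archive name are taken from the table above; the output directory name is illustrative.

```python
# Minimal sketch: download and unpack the precompiled QPC from the table above.
# Uses only the Python standard library; the output directory name is illustrative.
import tarfile
import urllib.request
from pathlib import Path

qpc_url = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/nvidia/"
    "Llama-3.1-Nemotron-Nano-8B-v1/"
    "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz"
)
archive = Path(qpc_url.rsplit("/", 1)[-1])
out_dir = Path("Llama-3.1-Nemotron-Nano-8B-v1-qpc")  # illustrative target directory

# Fetch the tarball (large download; make sure there is enough disk space).
urllib.request.urlretrieve(qpc_url, archive)

# Extract the QPC contents for use with the Cloud AI 100 runtime.
out_dir.mkdir(parents=True, exist_ok=True)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)

print(f"QPC extracted to {out_dir.resolve()}")
```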