# Nvidia Llama 3.1 Nemotron 70B Instruct HF AWQ INT4

## Model Overview
This repository is an AWQ 4-bit quantized version of nvidia/Llama-3.1-Nemotron-70B-Instruct-HF, an NVIDIA-customized version of meta-llama/Meta-Llama-3.1-70B-Instruct, originally released by Meta AI.

The model was quantized with AutoAWQ from FP16 down to INT4 for the GEMM kernels, using zero-point quantization and a group size of 128.
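For reference, below is a minimal sketch of how a checkpoint with these settings is typically produced with AutoAWQ. The source and output paths are assumptions, and the exact calibration setup used for this repository is not stated here; the `quant_config` mirrors the settings above (4-bit, zero-point, group size 128, GEMM kernels).

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed paths; the calibration data used for this repo is not documented here.
model_path = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
quant_path = "Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"

# Settings from the description: INT4, zero-point, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize (uses AutoAWQ's default calibration corpus unless one is passed).
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```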
- Model Architecture: Transformer (Llama 3.1)
- Model Source: ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4
- License: Llama 3.1 Community License Agreement
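As a usage sketch (not part of the original card), the quantized checkpoint can typically be loaded directly with transformers, which dispatches to AutoAWQ kernels when the `autoawq` package is installed. The prompt and generation settings below are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # AWQ weights stay INT4; activations run in FP16
    device_map="auto",   # shard across available GPUs
)

messages = [{"role": "user", "content": "Write a haiku about quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```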
## QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size (FBS) | Chunking Prompt Length (PL) | Context Length (CL) | QPC Size | QPC Download | ONNX Download | Generation Date |
|---|---|---|---|---|---|---|---|---|---|
| MXFP6 | 2 | 16 | 1 | 128 | 4096 | 43 GB | [Download](https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4/ibnzterrell_Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4_qpc_16cores_128pl_4096cl_1fbs_2devices_mxfp6_mxint8.tar.gz) | [Download](https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4/ibnzterrell_Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4_ONNX.tar.gz) | 18-Mar-2026 |
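Per the artifact name, this QPC was compiled with MXFP6 matmul weights and an MXINT8 KV cache for Qualcomm Cloud AI 100. Below is a hedged sketch of how an equivalent QPC is typically exported and compiled with the QEfficient (efficient-transformers) library; argument names may vary across SDK releases, and the prompt is illustrative only.

```python
from QEfficient import QEFFAutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"

# Export to ONNX and compile a QPC matching the table above.
model = QEFFAutoModelForCausalLM.from_pretrained(model_id)
model.compile(
    num_devices=2,         # SoCs / tensor slicing
    num_cores=16,          # NSP cores per SoC
    prefill_seq_len=128,   # chunking prompt length (PL)
    ctx_len=4096,          # context length (CL)
    full_batch_size=1,     # full batch size (FBS) for continuous batching
    mxfp6_matmul=True,     # MXFP6 weight precision for matmuls
    mxint8_kv_cache=True,  # MXINT8 KV cache, per the artifact name
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model.generate(prompts=["What is AWQ quantization?"], tokenizer=tokenizer)
```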