Nvidia Llama 3.1 Nemotron 70B Instruct HF AWQ INT4
Model Overview
This repository is an AWQ 4-bit quantized version of the nvidia/Llama-3.1-Nemotron-70B-Instruct-HF model, an NVIDIA-customized version of meta-llama/Meta-Llama-3.1-70B-Instruct, originally released by Meta AI.
The model was quantized with AutoAWQ from FP16 down to INT4 using GEMM kernels, zero-point quantization, and a group size of 128.
- Model Architecture: Llama 3.1 (transformer)
- Model Source: ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4
- License: Llama 3.1 Community License Agreement
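The quantization settings above correspond to an AutoAWQ configuration like the one below. The dict keys follow AutoAWQ's `quantize` API; the commented-out model-loading lines are a sketch only (reproducing the quantization requires the FP16 source model and substantial GPU memory):

```python
# AutoAWQ settings matching the description above: 4-bit weights,
# zero-point (asymmetric) quantization, group size 128, GEMM kernels.
quant_config = {
    "zero_point": True,   # asymmetric zero-point quantization
    "q_group_size": 128,  # one scale/zero-point per group of 128 weights
    "w_bit": 4,           # INT4 weight precision
    "version": "GEMM",    # GEMM kernel variant
}

# Hypothetical reproduction of the quantization (not run here):
# from awq import AutoAWQForCausalLM
# from transformers import AutoTokenizer
# model_path = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
# model = AutoAWQForCausalLM.from_pretrained(model_path)
# tokenizer = AutoTokenizer.from_pretrained(model_path)
# model.quantize(tokenizer, quant_config=quant_config)

print(quant_config)
```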
QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Generated URL | Download |
|---|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz | [Download](https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz) |
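The archive name in the table encodes the compile parameters (16 NSP cores, prompt length 128, context length 8192, full batch size 1, 4 devices, MXFP6 weights / MXINT8 KV cache). A minimal sketch of deriving the local filename from the URL; the actual download and unpack steps are commented out because the archive is large and must match the SDK version in the URL path:

```python
import os
from urllib.parse import urlparse

# QPC archive URL from the table above.
QPC_URL = ("https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/"
           "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4/"
           "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz")

# Local filename, derived from the URL path.
archive = os.path.basename(urlparse(QPC_URL).path)
print(archive)  # qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz

# Hypothetical fetch and unpack (not run here):
# import urllib.request, tarfile
# urllib.request.urlretrieve(QPC_URL, archive)
# with tarfile.open(archive) as tf:
#     tf.extractall("qpc")
```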