Meta Llama 3.1 8B Instruct AWQ INT4
Model Overview
This repository is a community-driven quantized version of the original model meta-llama/Meta-Llama-3.1-8B-Instruct, which is the official BF16 version released by Meta AI.
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
- Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
This repository contains meta-llama/Meta-Llama-3.1-8B-Instruct quantized with AutoAWQ from FP16 down to INT4, using the GEMM kernels and zero-point quantization with a group size of 128.
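As a minimal sketch of how a checkpoint with this configuration is typically produced with AutoAWQ (the exact calibration data and environment used for this repository are not specified here; the output path is illustrative):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"
quant_path = "Meta-Llama-3.1-8B-Instruct-AWQ-INT4"  # hypothetical output directory

# Mirrors the configuration described above: INT4 weights, zero-point
# quantization, group size 128, GEMM kernel variant.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Runs AWQ calibration (on AutoAWQ's default calibration set) and quantizes the weights.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```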
- Model Source: hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4
- License: A custom commercial license, the Llama 3.1 Community License
- Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
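The quantized checkpoint can be loaded directly with transformers (with autoawq installed); the snippet below is a minimal sketch of running inference on a CUDA GPU. The prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels compute in FP16
    device_map="auto",          # place layers on available devices
)

# Llama 3.1 Instruct expects the chat template shipped with the tokenizer.
messages = [{"role": "user", "content": "What is AWQ quantization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```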
QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length (PL) | Context Length (CL) | Download |
|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | [qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz](https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz) |
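A minimal sketch of fetching and unpacking the pre-compiled QPC artifact from the table above (URL copied from the MXFP6 row). Deploying the extracted container requires a matching Qualcomm SDK (version 1.19.6, per the URL path), which is outside the scope of this sketch:

```python
import tarfile
import urllib.request

url = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/hugging-quants/"
    "Meta-Llama-3.1-8B-Instruct-AWQ-INT4/"
    "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz"
)
archive = "qpc.tar.gz"

# Download the archive, then extract the compiled program container into ./qpc.
urllib.request.urlretrieve(url, archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("qpc")
```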