Meta Llama 3.1 70B Instruct AWQ INT4

Model Overview

This repository provides a community-driven quantized version of the original model meta-llama/Meta-Llama-3.1-70B-Instruct, the official FP16 half-precision release from Meta AI.

The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.

  • Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

This repository contains meta-llama/Meta-Llama-3.1-70B-Instruct quantized from FP16 down to INT4 using AutoAWQ, with the GEMM kernels and zero-point quantization at a group size of 128.
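
For reference, a quantization of this kind can be reproduced with AutoAWQ roughly as follows. This is a minimal sketch, not the exact script used to produce this repository; the output path and calibration defaults are assumptions.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-70B-Instruct"
quant_path = "Meta-Llama-3.1-70B-Instruct-AWQ-INT4"  # output directory (assumption)

# INT4 zero-point quantization with group size 128 and GEMM kernels, as described above
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # runs AutoAWQ's default calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)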

QPC Configurations

| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL |
| --- | --- | --- | --- | --- | --- | --- |
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 16 | 16 | 1 | 64 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4/qpc_16cores_64pl_8192cl_1fbs_16devices_mxfp6_mxint8.tar.gz |

Run This Model

# Download QPC
mkdir -p hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4
cd hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>
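
For example, to fetch and unpack the 4-SoC configuration from the table above:

wget https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz
tar xzvf qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz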

# Run QPC
python3 -m QEfficient.cloud.execute --model_name hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128
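
If the QPC was compiled for more than one SoC (e.g. the 4-device configuration above), the target Cloud AI devices can be selected with QEfficient's --device_group option. The device IDs below are an assumption for illustration; adjust them to the devices available on your host:

python3 -m QEfficient.cloud.execute \
  --model_name hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
  --qpc_path <path/to/qpc> \
  --device_group [0,1,2,3] \
  --prompt "# shortest path algorithm\n" \
  --generation_len 128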

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
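
Substitute the placeholders using the table above; for the 4-SoC configuration, <Context Length> is 8192, <Full Batch Size> is 1, and <Chunking Prompt Length> is 128. Once the server is up, it exposes vLLM's OpenAI-compatible REST API, which can be queried as follows (the prompt is illustrative):

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
        "messages": [{"role": "user", "content": "Write a shortest path algorithm in Python."}],
        "max_tokens": 128
      }'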