Llama 3.2 1B Instruct

Model Overview

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.

  • Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
  • Model Release Date: Sept 25, 2024.
  • Model Source: meta-llama/Llama-3.2-1B-Instruct
  • License: llama3.2
  • Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai
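
The weights themselves are not needed to run the prebuilt QPC below, but if you want the source checkpoint (for example, to recompile for a different configuration), it can be pulled from the Hub. A minimal sketch, assuming the huggingface_hub CLI is installed and your account has accepted the llama3.2 license:

# Fetch the gated source checkpoint (requires a Hub token with access to the repo)
pip install -U "huggingface_hub[cli]"
huggingface-cli login   # paste an access token when prompted
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --local-dir ./Llama-3.2-1B-Instruct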

QPC Configurations

| Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL | Generation Date |
|-----------|-----------------------|---------------------|-----------------|------------------------|---------------------|--------------|-----------------|
| MXFP6 | 2 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/meta-llama/Llama-3.2-1B-Instruct/Llama-3.2-1B-Instruct_qpc_16cores_128pl_8192cl_1fbs_2devices_mxfp6_mxint8.tar.gz | 16-Jan-2026 |
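
This configuration slices the model across 2 SoCs, so the host needs at least two Cloud AI devices visible before launch. A quick check, assuming the platform SDK's qaic-util lives at its default install path:

# Count visible Cloud AI devices (path and output format assume a default platform SDK install)
/opt/qti-aic/tools/qaic-util -q | grep -c "QID"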

Run This Model

# Download QPC
mkdir -p meta-llama/Llama-3.2-1B-Instruct
cd meta-llama/Llama-3.2-1B-Instruct
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name meta-llama/Llama-3.2-1B-Instruct --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128
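
Because this QPC was compiled for 2 SoCs, execution may need to be pinned to a matching set of devices. A sketch, assuming QEfficient's --device_group flag and device IDs 0 and 1 (adjust to your host):

# Run the QPC across both SoCs it was compiled for (device IDs are illustrative)
python3 -m QEfficient.cloud.execute \
  --model_name meta-llama/Llama-3.2-1B-Instruct \
  --qpc_path <path/to/qpc> \
  --device_group [0,1] \
  --prompt "# shortest path algorithm\n" \
  --generation_len 128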

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model meta-llama/Llama-3.2-1B-Instruct \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
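
Once the server is up, it speaks the standard OpenAI-compatible REST API. An example request against the host/port configured above (prompt and token budget are illustrative):

# Query the chat completions endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.2-1B-Instruct",
        "messages": [{"role": "user", "content": "Summarize the shortest path problem in two sentences."}],
        "max_tokens": 128
      }'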