Llama 3.2 3B Instruct GGUF
Model Overview
This is a llama.cpp imatrix quantization of Llama-3.2-3B-Instruct, quantized using llama.cpp release b3821. All quants were made using the imatrix option with a dataset from here.
- Original model: https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
- Model Source: bartowski/Llama-3.2-3B-Instruct-GGUF
- License: llama3.2
QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL | Generation Date |
|---|---|---|---|---|---|---|---|
| MXFP6 | 2 | 16 | 1 | 32 | 512 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/bartowski/Llama-3.2-3B-Instruct-GGUF/bartowski_Llama-3.2-3B-Instruct-GGUF_qpc_16cores_32pl_512cl_2devices.tar.gz | 05-Feb-2026 |
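For scripting the steps below, it can help to keep the table's values in one place. The following sketch (the dict and its key names are illustrative, not part of any SDK) also notes how each value maps onto the vLLM flags used in the API Endpoint section:

```python
# QPC configuration from the table above; key names are illustrative.
QPC_CONFIG = {
    "precision": "MXFP6",
    "num_devices": 2,           # SoCs / tensor slicing
    "nsp_cores_per_soc": 16,
    "full_batch_size": 1,       # -> vLLM --max-num-seqs
    "chunking_prompt_len": 32,  # -> vLLM --max-seq-len-to-capture
    "context_len": 512,         # -> vLLM --max-model-len
}
```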
Run This Model
```bash
# Download QPC
mkdir -p bartowski/Llama-3.2-3B-Instruct-GGUF
cd bartowski/Llama-3.2-3B-Instruct-GGUF
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>
```
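The same download and extraction can be done from Python with only the standard library; this is a minimal sketch using the download URL from the table above:

```python
import tarfile
import urllib.request

# Download URL taken from the QPC Configurations table above.
url = ("https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/bartowski/"
       "Llama-3.2-3B-Instruct-GGUF/bartowski_Llama-3.2-3B-Instruct-GGUF"
       "_qpc_16cores_32pl_512cl_2devices.tar.gz")
archive = url.rsplit("/", 1)[-1]

# Fetch the archive and unpack it into the current directory.
urllib.request.urlretrieve(url, archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(".")
```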
```bash
# Run QPC
python3 -m QEfficient.cloud.execute \
    --model_name bartowski/Llama-3.2-3B-Instruct-GGUF \
    --qpc_path <path/to/qpc> \
    --prompt "# shortest path algorithm\n" \
    --generation_len 128
```
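To drive the same CLI from Python, for example over a batch of prompts, a thin subprocess wrapper is enough. This is a sketch that uses only the flags shown above; the QPC path in the example call is a placeholder for wherever you extracted the archive:

```python
import subprocess

def run_qpc(qpc_path: str, prompt: str, generation_len: int = 128) -> None:
    """Invoke the QEfficient execute CLI shown above for one prompt."""
    subprocess.run(
        [
            "python3", "-m", "QEfficient.cloud.execute",
            "--model_name", "bartowski/Llama-3.2-3B-Instruct-GGUF",
            "--qpc_path", qpc_path,
            "--prompt", prompt,
            "--generation_len", str(generation_len),
        ],
        check=True,
    )

# Example call; the QPC path is a placeholder for your extracted directory.
run_qpc("<path/to/qpc>", "# shortest path algorithm\n")
```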
API Endpoint
```bash
# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 8000 \
    --model bartowski/Llama-3.2-3B-Instruct-GGUF \
    --max-model-len <Context Length> \
    --max-num-seqs <Full Batch Size> \
    --max-seq-len-to-capture <Chunking Prompt Length> \
    --device qaic \
    --block-size 32
```
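Once the server is up, it serves the standard OpenAI-compatible REST API, so any OpenAI-style client works. Below is a minimal completion request using only the Python standard library; host, port, and model name match the server flags above:

```python
import json
import urllib.request

# Minimal completion request against the OpenAI-compatible endpoint
# started above (host/port match the server flags).
payload = {
    "model": "bartowski/Llama-3.2-3B-Instruct-GGUF",
    "prompt": "# shortest path algorithm\n",
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```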