
phi-4 AWQ

Model Overview

AWQ quantization of the base phi-4 model was performed by stelterlab in INT4 GEMM format using AutoAWQ by casper-hansen.

phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning.

phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.

The phi-4 model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:

1. Memory/compute-constrained environments.
2. Latency-bound scenarios.
3. Reasoning and logic.
  • Model Architecture: 14B-parameter, dense decoder-only Transformer model. Input: text, best suited for prompts in the chat format (see the sketch after this list). Output: generated text in response to the input.
  • Model Release Date: December 12, 2024.
  • Model Source: stelterlab/phi-4-AWQ
  • License: MIT
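
Since phi-4 works best with chat-format prompts, the sketch below shows one way to build such a prompt with the Hugging Face transformers tokenizer. This is a minimal illustration, assuming the stelterlab/phi-4-AWQ repository ships the standard phi-4 chat template and that the transformers package is installed.

# Build a chat-format prompt (minimal sketch; assumes the tokenizer in
# stelterlab/phi-4-AWQ provides the standard phi-4 chat template)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stelterlab/phi-4-AWQ")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain a shortest path algorithm briefly."},
]

# Render the messages into the plain-text prompt the model expects,
# appending the generation prompt so the model continues as the assistant
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)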

QPC Configurations

Precision: MXFP6
SoCs / Tensor slicing: 4
NSP-Cores (per SoC): 16
Full Batch Size: 1
Chunking Prompt Length: 128
Context Length (CL): 8192
Generated URL: https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/stelterlab/phi-4-AWQ/phi-4-AWQ_qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz

Run This Model

# Download QPC
mkdir -p stelterlab/phi-4-AWQ
cd stelterlab/phi-4-AWQ
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name stelterlab/phi-4-AWQ --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model stelterlab/phi-4-AWQ \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
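
Once the server is running it exposes the standard OpenAI-compatible REST API, so any OpenAI-style client can talk to it. Below is a minimal sketch of a chat completion request, assuming the host and port from the command above and that the Python requests package is available.

# Query the OpenAI-compatible endpoint started above (minimal sketch;
# assumes the server is reachable at http://localhost:8000)
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "stelterlab/phi-4-AWQ",
        "messages": [
            {"role": "user", "content": "Write a shortest path algorithm in Python."}
        ],
        "max_tokens": 128,
    },
)
print(response.json()["choices"][0]["message"]["content"])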