Phi 4
Model Overview¶
phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning.
phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
The phi-4 model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:
1. Memory/compute constrained environments.
2. Latency bound scenarios.
3. Reasoning and logic.
- Model Architecture: 14B-parameter, dense decoder-only Transformer. Input: text, best suited for prompts in the chat format. Output: generated text in response to the input.
- Model Release Date: December 12, 2024.
- Model Source: microsoft/phi-4
- License: MIT
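Since the model is best suited to chat-format prompts, the sketch below shows how a single-turn prompt can be assembled. The special tokens (`<|im_start|>`, `<|im_sep|>`, `<|im_end|>`) follow the format documented in the microsoft/phi-4 model card; verify against the tokenizer's own chat template before relying on this layout.

```python
# Sketch of phi-4's chat prompt format. The special tokens below are taken
# from the microsoft/phi-4 model card; in practice, prefer the tokenizer's
# apply_chat_template() so the format always matches the model.
def build_chat_prompt(system: str, user: str) -> str:
    """Assemble a single-turn chat prompt in phi-4's expected format."""
    return (
        f"<|im_start|>system<|im_sep|>{system}<|im_end|>"
        f"<|im_start|>user<|im_sep|>{user}<|im_end|>"
        f"<|im_start|>assistant<|im_sep|>"  # generation continues from here
    )

prompt = build_chat_prompt("You are a helpful assistant.", "What is 2 + 2?")
print(prompt)
```

The prompt ends with the opened assistant turn, so the model's generated text is the assistant's reply.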
QPC Configurations¶
| Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | QPC URL | QPC Size | QPC Download | Onnx URL | Onnx Download | Generation Date |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MXFP6 | 2 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/microsoft/phi-4/microsoft_phi-4_qpc_16cores_128pl_8192cl_1fbs_2devices_mxfp6_mxint8_ccl.tar.gz | 15GB | Download | https://dc00tk1pxen80.cloudfront.net/SDK1.21.2/microsoft/phi-4/microsoft_phi-4_ONNX.tar.gz | Download | 17-Mar-2026 |
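To illustrate how the "Chunking Prompt Length" column interacts with the context length: with a fixed chunk size, prefill processes the prompt in ceiling(prompt length / chunk length) passes. The sketch below assumes this chunked-prefill behavior with the values from the table (128-token chunks, 8192-token context); the function name and the exact runtime behavior are illustrative assumptions, not the SDK's API.

```python
import math

# Illustrative sketch (not an SDK API): number of prefill passes when a
# prompt is processed in fixed-size chunks. Defaults come from the table
# above: 128-token chunking prompt length, 8192-token context length.
def prefill_chunks(prompt_len: int, chunk_len: int = 128, ctx_len: int = 8192) -> int:
    if prompt_len > ctx_len:
        raise ValueError("prompt exceeds the compiled context length")
    return math.ceil(prompt_len / chunk_len)

print(prefill_chunks(1000))  # a 1000-token prompt needs 8 chunks of 128
```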