Whisper large-v3-turbo
Model Overview¶
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on >5M hours of labeled data, Whisper demonstrates a strong ability to generalise to many datasets and domains in a zero-shot setting.
- Model Architecture: Whisper large-v3-turbo is a fine-tuned version of a pruned Whisper large-v3. In other words, it is the same model, except that the number of decoding layers has been reduced from 32 to 4.
- Model Source: openai/whisper-large-v3-turbo
- License: Apache 2.0 license
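As a minimal sketch of how the model is typically used, the snippet below loads whisper-large-v3-turbo through the Hugging Face `transformers` ASR pipeline. This is an illustrative example, not part of this page: it assumes `transformers` and `torch` are installed and that an audio file named `sample.wav` exists.

```python
MODEL_ID = "openai/whisper-large-v3-turbo"


def build_asr_pipeline():
    """Build an ASR pipeline for whisper-large-v3-turbo (illustrative sketch)."""
    # Imported lazily so merely loading this module does not require torch.
    import torch
    from transformers import pipeline

    # Use the first GPU if one is available, otherwise fall back to CPU.
    device = 0 if torch.cuda.is_available() else -1
    return pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        device=device,
    )


if __name__ == "__main__":
    # The model weights are downloaded on first use; "sample.wav" is a
    # placeholder audio file, not something provided by this page.
    asr = build_asr_pipeline()
    print(asr("sample.wav")["text"])
```

With only 4 decoder layers instead of 32, decoding is substantially faster than large-v3 while the encoder (and thus transcription quality on most inputs) stays close to the original.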
QPC Configurations¶
| Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Batch Size | Chunking Prompt Length | Context Length (CL) | Generated URL | Download | Generation Date |
|---|---|---|---|---|---|---|---|---|
| MXFP6 | 1 | 8 | 1 | 1 | 150 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/openai/whisper-large-v3-turbo/whisper_large_v3_turbo_qpc_8cores_1pl_150cl_1devices.tar.gz | Download | 05-Feb-2026 |
| MXFP6 | 2 | 8 | 1 | 1 | 150 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/openai/whisper-large-v3-turbo/whisper_large_v3_turbo_qpc_8cores_1pl_150cl_2devices.tar.gz | Download | 05-Feb-2026 |
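A prebuilt QPC bundle from the table above can be fetched and unpacked with a few lines of standard-library Python. This is a hedged sketch: the URL is copied from the single-SoC row, and the output directory name `whisper_large_v3_turbo_qpc` is illustrative, not mandated by the SDK.

```python
import tarfile
import urllib.request
from pathlib import Path

# URL copied from the 1-SoC row of the QPC configuration table.
QPC_URL = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/openai/"
    "whisper-large-v3-turbo/"
    "whisper_large_v3_turbo_qpc_8cores_1pl_150cl_1devices.tar.gz"
)


def fetch_qpc(url: str, out_dir: str = "whisper_large_v3_turbo_qpc") -> Path:
    """Download a QPC tarball and extract it into out_dir (illustrative name)."""
    dest = Path(out_dir)
    dest.mkdir(parents=True, exist_ok=True)
    # Download to a temporary file, then unpack the gzipped tarball.
    archive, _ = urllib.request.urlretrieve(url)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    return dest


if __name__ == "__main__":
    # Guarded so importing this module never triggers a large download.
    print(fetch_qpc(QPC_URL))
```

For the 2-SoC configuration, substitute the `...2devices.tar.gz` URL from the second row; the extracted QPC directory is then passed to the runtime when launching inference.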