# DeepSeek R1 Distill Qwen 32B
## Model Overview
DeepSeek-R1 and its distilled models represent a significant advance in reasoning capabilities for large language models (LLMs), combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Qwen-32B distills DeepSeek-R1's reasoning capabilities into the Qwen-32B model, and is optimized for efficient inference while retaining high-quality generative capabilities. It is particularly suited to scenarios where computational efficiency is critical.
- Model Architecture: DeepSeek-R1-Distill-Qwen-32B is based on the Qwen-32B transformer architecture, with reasoning behavior distilled from the much larger DeepSeek-R1 model to reduce computational requirements while maintaining competitive performance. The distillation process preserves the core capabilities of the teacher model, making it suitable for a wide range of text generation tasks.
- Repository: DeepSeek-R1
- Model Source: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- License: MIT License.
## QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL |
|---|---|---|---|---|---|---|
| MXFP6 | 8 | 16 | 1 | 64 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.18.4/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B7/qpc_16cores_1bs_64pl_4096cl_-1mos_1fbs_8devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 16 | 8 | 64 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.18.4/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B7/qpc_16cores_1bs_64pl_4096cl_-1mos_8fbs_8devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 8 | 1 | 64 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.18.4/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B7/qpc_8cores_1bs_64pl_4096cl_-1mos_1fbs_8devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 8 | 8 | 64 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.18.4/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B7/qpc_8cores_1bs_64pl_4096cl_-1mos_8fbs_8devices_mxfp6_mxint8.tar.gz |
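The QPC tarball names in the table encode the compile-time configuration (NSP cores, batch size, prompt length, context length, full batch size, device count, precision). The snippet below is an illustrative sketch, not part of any official SDK: a small parser (`parse_qpc_name` is a hypothetical helper) that recovers those fields from a file name so they can be cross-checked against the table.

```python
import re

# Matches names like:
#   qpc_16cores_1bs_64pl_4096cl_-1mos_1fbs_8devices_mxfp6_mxint8.tar.gz
PATTERN = re.compile(
    r"qpc_(?P<cores>\d+)cores_(?P<bs>\d+)bs_(?P<pl>\d+)pl_(?P<cl>\d+)cl"
    r"_(?P<mos>-?\d+)mos_(?P<fbs>\d+)fbs_(?P<devices>\d+)devices"
    r"_(?P<precision>\w+?)_(?P<kv_precision>\w+?)\.tar\.gz"
)

def parse_qpc_name(name: str) -> dict:
    """Return the configuration fields encoded in a QPC tarball name."""
    m = PATTERN.search(name)
    if m is None:
        raise ValueError(f"unrecognized QPC name: {name}")
    # Numeric fields become ints; precision fields stay strings.
    return {
        k: (int(v) if v.lstrip("-").isdigit() else v)
        for k, v in m.groupdict().items()
    }

cfg = parse_qpc_name(
    "qpc_16cores_1bs_64pl_4096cl_-1mos_8fbs_8devices_mxfp6_mxint8.tar.gz"
)
print(cfg["cores"], cfg["fbs"], cfg["cl"], cfg["precision"])  # → 16 8 4096 mxfp6
```

For example, the second table row (full batch size 8) parses to `fbs=8` while the per-inference batch size in the name stays `bs=1`, matching the table's separate "Full Batch Size" column.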