# DeepSeek R1 Distill Qwen 32B AWQ

## Model Overview
This quantized model was created with AutoAWQ version 0.2.7.post3 using the following `quant_config`: `{"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}`.
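For reference, here is a minimal sketch of how a checkpoint like this is typically produced with AutoAWQ using the `quant_config` above. The calibration data and the exact script used for this model are not documented here, so treat this as an illustration of the standard workflow rather than the authors' actual recipe:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Base model on the Hugging Face Hub and a local output directory
# (the output path is an arbitrary choice for this sketch).
model_path = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
quant_path = "DeepSeek-R1-Distill-Qwen-32B-AWQ"

# The quant_config reported above: 4-bit weights, group size 128,
# zero-point quantization, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize (AutoAWQ falls back to a default calibration dataset when
# none is supplied), then save the quantized weights and tokenizer.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```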
DeepSeek-R1 and its distilled models represent a significant advance in reasoning capability for large language models (LLMs), achieved by combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Qwen-32B transfers DeepSeek-R1's reasoning ability into the Qwen-32B model, optimizing for efficient inference while retaining high-quality generative capabilities. It is particularly well suited to scenarios where computational efficiency is critical.
- Model Architecture: DeepSeek-R1-Distill-Qwen-32B is based on the Qwen-32B transformer architecture, with reasoning capabilities distilled from the much larger DeepSeek-R1 model to reduce computational requirements while maintaining competitive performance. Because distillation preserves the core capabilities of the teacher model, the result is suitable for a wide range of text generation tasks.
- Model Source: Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ
- License: apache-2.0
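AWQ checkpoints in this layout load directly through Hugging Face Transformers when the `autoawq` package is installed. The following is an illustrative generation sketch against the checkpoint named under Model Source; the prompt and generation settings are arbitrary examples, not recommendations from the model authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ"

# Transformers detects the AWQ quantization config stored in the checkpoint
# and dispatches to the AWQ kernels; device_map="auto" spreads the layers
# across the available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Arbitrary example prompt, formatted with the model's chat template.
messages = [{"role": "user", "content": "Explain AWQ quantization in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```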
## QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download |
|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | [qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz](https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Qwen-32B-A/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz) |
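To fetch and unpack the precompiled QPC archive from the table, a plain Python sketch follows. The local file and directory names are arbitrary choices; the extracted QPC directory would then be pointed at by whatever Cloud AI accelerator runtime you use:

```python
import tarfile
import urllib.request

# URL taken verbatim from the table above; local paths are arbitrary.
qpc_url = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Valdemardi/"
    "DeepSeek-R1-Distill-Qwen-32B-A/"
    "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz"
)
archive_path = "qpc_mxfp6.tar.gz"
extract_dir = "qpc"

# Download the archive, then unpack the QPC directory for the runtime to load.
urllib.request.urlretrieve(qpc_url, archive_path)
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(extract_dir)
```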