# DeepSeek R1 Distill Llama 70B AWQ
## Model Overview
This quantized model was created with AutoAWQ version 0.2.8 using the following `quant_config`: `{"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}`
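For reference, below is a minimal sketch of how an equivalent checkpoint could be produced with AutoAWQ using the `quant_config` above; the base-model path, output directory, and calibration defaults are assumptions, not details stated on this card:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed paths; the card does not state which base checkpoint was used.
model_path = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
quant_path = "DeepSeek-R1-Distill-Llama-70B-AWQ"

# quant_config exactly as listed above.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration/quantization, then save the 4-bit checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```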
DeepSeek-R1 and its distilled models represent a significant advancement in reasoning capabilities for LLMs, achieved by combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Llama-70B is a distilled version of the Llama-3.3-70B-Instruct large language model, optimized for efficient inference while retaining high-quality generative capabilities. It is particularly suited for scenarios where computational efficiency is critical.
- Model Architecture: DeepSeek-R1-Distill-Llama-70B is based on a transformer architecture, distilled from the larger Llama-3.3-70B-Instruct model to reduce computational requirements while maintaining competitive performance. The distillation process ensures that the model retains the core capabilities of the original, making it suitable for a wide range of text generation tasks.
- Model Source: Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ (see the loading sketch after this list)
- License: llama3.3
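For inference, the quantized checkpoint can be loaded through Hugging Face Transformers, which reads AWQ weights directly when the `autoawq` package is installed. A minimal sketch follows; the prompt and generation settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ"

# Transformers picks up the AWQ quantization config from the checkpoint;
# device_map="auto" shards the model across available accelerators.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between RL and SFT in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```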
## QPC Configurations
| Precision | SoCs (Tensor Slicing) | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL |
|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz |
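For convenience, here is a small Python sketch for fetching and unpacking the precompiled QPC archive, assuming the CloudFront URL above remains reachable and serves the tarball directly:

```python
import tarfile
import urllib.request

# URL copied from the table above.
url = ("https://dc00tk1pxen80.cloudfront.net/SDK1.19.6/Valdemardi/"
       "DeepSeek-R1-Distill-Llama-70B-AWQ/"
       "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz")

# Download the archive to the current directory.
archive = url.rsplit("/", 1)[-1]
urllib.request.urlretrieve(url, archive)

# Extract the QPC binaries into a local "qpc" directory.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("qpc")
```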