DeepSeek R1 Distill Llama 70B AWQ
Model Overview
This quantized model was created using AutoAWQ version 0.2.8 with the following quant_config: `{"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}`.
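The configuration above can be expressed as a short AutoAWQ script. This is a hedged sketch, not the exact command used to produce this checkpoint: the base-model path is a placeholder, and the quantization and save calls are left commented out because they require GPUs and the full-precision weights.

```python
# Sketch of 4-bit AWQ quantization with AutoAWQ (~0.2.8); illustrative only.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# from awq import AutoAWQForCausalLM
# from transformers import AutoTokenizer
# model_path = "<path to full-precision DeepSeek-R1-Distill-Llama-70B>"  # placeholder
# model = AutoAWQForCausalLM.from_pretrained(model_path)
# tokenizer = AutoTokenizer.from_pretrained(model_path)
# model.quantize(tokenizer, quant_config=quant_config)
# model.save_quantized("DeepSeek-R1-Distill-Llama-70B-AWQ")

print(quant_config["w_bit"], quant_config["q_group_size"])
```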
DeepSeek-R1 and its distilled models represent a significant advancement in reasoning capabilities for LLMs, combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Llama-70B is a distilled version based on the Llama3.3-70B-Instruct large language model (LLM), optimized for efficient inference while retaining high-quality generative capabilities; it is particularly suited to scenarios where computational efficiency is critical.
- Model Architecture: DeepSeek-R1-Distill-Llama-70B is based on a transformer architecture, distilled from the larger Llama3.3-70B-Instruct model to reduce computational requirements while maintaining competitive performance. The distillation process ensures that the model retains the core capabilities of the original model, making it suitable for a wide range of text generation tasks.
- Model Source: Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ
- License: llama3.3
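For use outside the Cloud AI stack, the checkpoint can also be loaded with Hugging Face transformers, which supports AWQ checkpoints when the `autoawq` package is installed. This is a minimal sketch; the heavy model instantiation is commented out because the 70B weights need substantial GPU memory.

```python
# Sketch: loading this AWQ checkpoint via transformers (requires `autoawq`).
MODEL_ID = "Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ"

# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

print(MODEL_ID)
```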
QPC Configurations
| Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Generated URL | Download | Generation Date |
|---|---|---|---|---|---|---|---|---|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz | Download | |
| MXFP6 | 8 | 16 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/qpc_16cores_128pl_8192cl_1fbs_8devices_mxfp6_mxint8.tar.gz | Download | |
| MXFP6 | 4 | 8 | 1 | 128 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/qpc_8cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz | Download | |
| MXFP6 | 8 | 8 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/qpc_8cores_128pl_8192cl_1fbs_8devices_mxfp6_mxint8.tar.gz | Download | |
| MXFP6 | 2 | 16 | 1 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ/DeepSeek-R1-Distill-Llama-70B-AWQ_qpc_16cores_128pl_4096cl_1fbs_2devices_mxfp6_mxint8.tar.gz | Download | 19-Jan-2026 |
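The tarball names in the table follow a consistent pattern built from the configuration parameters (the 2-SoC row additionally prefixes the model name). A small helper, assuming that pattern holds for all rows:

```python
def qpc_filename(cores, pl, cl, fbs, devices, precision="mxfp6_mxint8"):
    """Build a QPC tarball name from the table's configuration columns:
    NSP-cores per SoC, chunking prompt length, context length,
    full batch size, and SoC (device) count."""
    return f"qpc_{cores}cores_{pl}pl_{cl}cl_{fbs}fbs_{devices}devices_{precision}.tar.gz"

# First table row: 4 SoCs, 16 cores, FBS 1, PL 128, CL 8192
print(qpc_filename(16, 128, 8192, 1, 4))
# qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz
```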
Run This Model
# Download QPC
mkdir -p Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ
cd Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>
# Run QPC
python3 -m QEfficient.cloud.execute --model_name Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128
API Endpoint
# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--model Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ \
--max-model-len <Context Length> \
    --max-num-seqs <Full Batch Size> \
    --max-seq-len-to-capture <Chunking Prompt Length> \
--device qaic \
--block-size 32
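Once the server is up, it exposes the standard OpenAI-compatible routes such as `/v1/completions`. The sketch below builds a request payload locally; the host, port, and sampling parameters are illustrative, and the actual POST is commented out so the snippet runs without a live server.

```python
import json

# Example request to the vLLM OpenAI-compatible endpoint started above.
payload = {
    "model": "Valdemardi/DeepSeek-R1-Distill-Llama-70B-AWQ",
    "prompt": "# shortest path algorithm\n",
    "max_tokens": 128,        # matches --generation_len in the QPC example
    "temperature": 0.6,       # illustrative sampling setting
}

# To query a running server (host/port as configured above):
# import requests
# r = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=300)
# print(r.json()["choices"][0]["text"])

print(json.dumps(payload, sort_keys=True))
```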