DeepSeek R1 Distill Llama 70B

Model Overview

DeepSeek-R1 and its distilled models represent a significant advancement in reasoning capabilities for LLMs, combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Llama-70B is based on Llama-3.3-70B-Instruct and fine-tuned on reasoning data generated by DeepSeek-R1. It is optimized for efficient inference while retaining high-quality generative capabilities, making it particularly well suited to scenarios where computational efficiency is critical.

  • Model Architecture: DeepSeek-R1-Distill-Llama-70B uses the transformer architecture of Llama-3.3-70B-Instruct, into which the reasoning capabilities of the larger DeepSeek-R1 model have been distilled. The distillation process transfers the core reasoning behavior of DeepSeek-R1 at a lower computational cost, making the model suitable for a wide range of text generation tasks.
  • Repository: DeepSeek-V3
  • Model Source: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  • License: MIT License.
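Conceptually, distillation trains a smaller student model to match a larger teacher. DeepSeek's distilled models were produced by supervised fine-tuning on samples generated by DeepSeek-R1, but the classic soft-label formulation minimizes the KL divergence between temperature-softened teacher and student output distributions. The following is an illustrative sketch of that objective with toy logits, not code from the actual training pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions: the soft-label
    # term used in classic logit-based knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero divergence; a mismatched student gives a positive loss.
print(round(distillation_kl([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # prints 0.0
print(distillation_kl([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)        # prints True
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge.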

QPC Configurations

| Precision | SoCs / Tensor slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL |
|-----------|-----------------------|---------------------|-----------------|------------------------|---------------------|--------------|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 4 | 8 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_8cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 16 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_16cores_128pl_8192cl_1fbs_8devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 8 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.20.4/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_8cores_128pl_8192cl_1fbs_8devices_mxfp6_mxint8.tar.gz |

Run This Model

# Download QPC
mkdir -p deepseek-ai/DeepSeek-R1-Distill-Llama-70B
cd deepseek-ai/DeepSeek-R1-Distill-Llama-70B
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name deepseek-ai/DeepSeek-R1-Distill-Llama-70B --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-70B \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
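Once the server is up, it exposes the standard OpenAI-compatible REST routes. A minimal client sketch using only the Python standard library is shown below; the `localhost:8000` URL assumes the host and port flags above, and the prompt is illustrative:

```python
import json
from urllib import request

API_URL = "http://localhost:8000/v1/chat/completions"  # host/port from the server flags above
MODEL = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"

def build_chat_request(prompt, max_tokens=128):
    # JSON body for the OpenAI-compatible chat completions route served by vLLM.
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def complete(prompt):
    # Posts the request and returns the first completion's text.
    req = request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Inspect the payload that would be sent; call complete(...) against a running server.
print(json.dumps(build_chat_request("# shortest path algorithm\n"), indent=2))
```

The same request can be issued with any OpenAI-compatible client (for example the `openai` Python package) by pointing its base URL at the endpoint above.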