DeepSeek R1 Distill Llama 70B

Model Overview

DeepSeek-R1 and its distilled models represent a significant advance in reasoning capability for large language models (LLMs), achieved by combining reinforcement learning (RL), supervised fine-tuning (SFT), and distillation. DeepSeek-R1-Distill-Llama-70B is built on Llama-3.3-70B-Instruct and distilled from DeepSeek-R1: the smaller model is fine-tuned on reasoning data generated by the R1 teacher. It retains high-quality generative capabilities at a reduced computational cost, making it particularly well suited to scenarios where efficiency is critical.

  • Model Architecture: DeepSeek-R1-Distill-Llama-70B uses the Llama transformer architecture. Distillation transfers the reasoning behavior of the DeepSeek-R1 teacher into the Llama-3.3-70B-Instruct student, reducing computational requirements relative to the teacher while maintaining competitive performance, so the model remains suitable for a wide range of text generation tasks.
  • Repository: DeepSeek-R1
  • Model Source: deepseek-ai/DeepSeek-R1-Distill-Llama-70B (see the loading sketch after this list)
  • License: MIT License.
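For orientation, the following is a minimal sketch of loading the checkpoint named in the Model Source entry with Hugging Face transformers. It assumes an environment with enough accelerator memory for a 70B model and is illustrative only; it is separate from the QPC deployment path described in the next section.

```python
# Minimal sketch: load DeepSeek-R1-Distill-Llama-70B with Hugging Face transformers.
# Assumes sufficient accelerator memory; quantize or shard further as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available devices
)

# R1-distill models are chat/reasoning tuned; use the chat template.
messages = [{"role": "user", "content": "How many prime numbers are below 20?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With `device_map="auto"`, transformers shards the weights across whatever accelerators are visible; for a 70B model this typically requires multiple GPUs or offloading.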

QPC Configurations

| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length (PL) | Context Length (CL) | Generated URL |
|-----------|-----------------------|---------------------|-----------------|------------------------------|---------------------|---------------|
| MXFP6 | 4 | 16 | 1 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.19.6/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 16 | 8 | 128 | 8192 | https://qualcom-qpc-models.s3-accelerate.amazonaws.com/SDK1.19.6/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/qpc_16cores_128pl_8192cl_8fbs_8devices_mxfp6_mxint8.tar.gz |
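Each generated URL points to a gzipped tar archive of pre-compiled QPC artifacts. The sketch below, using only the Python standard library, downloads and unpacks the 4-SoC configuration from the table; the local file and directory names are assumptions for illustration.

```python
# Minimal sketch: download and unpack a pre-compiled QPC package.
# The URL is the 4-SoC MXFP6 row from the table above; the local
# archive name and "qpc" extraction directory are illustrative.
import tarfile
import urllib.request

url = (
    "https://qualcom-qpc-models.s3-accelerate.amazonaws.com/"
    "SDK1.19.6/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/"
    "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz"
)
archive = "qpc_16cores_128pl_8192cl_1fbs_4devices_mxfp6_mxint8.tar.gz"

urllib.request.urlretrieve(url, archive)  # large download; ensure disk space

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path="qpc")  # QPC artifacts for the runtime to load
```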