
Mistral 7B Instruct v0.1

Model Overview

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model, trained on a variety of publicly available conversation datasets.
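
As a usage note (this follows the upstream Mistral instruct chat format and is not stated elsewhere on this page), prompts to the instruct model are normally wrapped in [INST] ... [/INST] tags, for example:

# Illustrative instruction-style prompt (can be passed via --prompt "$PROMPT" in the commands below)
PROMPT="[INST] Explain Dijkstra's shortest path algorithm in two sentences. [/INST]"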

QPC Configurations

Precision | SoCs / Tensor slicing | NSP-Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL | Generation Date
MXFP6 | 4 | 16 | 16 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/mistralai/Mistral-7B-Instruct-v0.1/qpc_16cores_128pl_4096cl_16fbs_4devices_mxfp6_mxint8.tar.gz | -
MXFP6 | 4 | 16 | 1 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/mistralai/Mistral-7B-Instruct-v0.1/qpc_16cores_128pl_4096cl_1fbs_4devices_mxfp6_mxint8.tar.gz | -
MXFP6 | 4 | 16 | 8 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/mistralai/Mistral-7B-Instruct-v0.1/qpc_16cores_128pl_4096cl_8fbs_4devices_mxfp6_mxint8.tar.gz | -
MXFP6 | 2 | 16 | 1 | 128 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/mistralai/Mistral-7B-Instruct-v0.1/Mistral-7B-Instruct-v0.1_qpc_16cores_128pl_4096cl_1fbs_2devices_mxfp6_mxint8.tar.gz | 21-Jan-2026
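
The table columns correspond to the compile-time settings baked into each QPC. As a minimal sketch (not taken from this page), a comparable QPC can be generated locally with QEfficient; the flag names below (--num_cores, --prompt_len, --ctx_len, --full_batch_size, --mxfp6, --mxint8, --device_group) are assumptions based on the efficient-transformers CLI and should be verified with python3 -m QEfficient.cloud.infer --help.

# Hedged sketch: recreate the 8-sample full-batch, 4-SoC row above (flag names are assumptions)
python3 -m QEfficient.cloud.infer \
  --model_name mistralai/Mistral-7B-Instruct-v0.1 \
  --num_cores 16 \
  --prompt_len 128 \
  --ctx_len 4096 \
  --full_batch_size 8 \
  --mxfp6 --mxint8 \
  --device_group "[0,1,2,3]" \
  --prompt "Hello" \
  --generation_len 32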

Run This Model

# Download QPC
mkdir -p mistralai/Mistral-7B-Instruct-v0.1
cd mistralai/Mistral-7B-Instruct-v0.1
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name mistralai/Mistral-7B-Instruct-v0.1 --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128
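
For QPCs compiled with a full batch size greater than 1, several prompts can be decoded concurrently by supplying a prompts file instead of a single --prompt. This is a hedged sketch; the --prompts_txt_file_path and --device_group flag names are assumptions to confirm against python3 -m QEfficient.cloud.execute --help.

# Hedged sketch: batched execution with one prompt per line
printf '%s\n' "[INST] Explain Dijkstra's algorithm. [/INST]" "[INST] Summarize Mistral 7B in one line. [/INST]" > prompts.txt
python3 -m QEfficient.cloud.execute \
  --model_name mistralai/Mistral-7B-Instruct-v0.1 \
  --qpc_path <path/to/qpc> \
  --prompts_txt_file_path prompts.txt \
  --device_group "[0,1,2,3]" \
  --generation_len 128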

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model mistralai/Mistral-7B-Instruct-v0.1 \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
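
Once the server is running, it exposes vLLM's standard OpenAI-compatible REST API. A minimal request against the completions endpoint looks like this (the prompt and sampling values are illustrative):

# Query the OpenAI-compatible completions endpoint
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistralai/Mistral-7B-Instruct-v0.1",
        "prompt": "[INST] Explain the shortest path algorithm. [/INST]",
        "max_tokens": 128
      }'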