Qwen3 30B A3B Instruct 2507

Model Overview

Qwen3-30B-A3B-Instruct-2507 is a state-of-the-art large language model from the Qwen3 series, designed to provide strong performance across a wide range of domains including reasoning, coding, multilingual understanding, and long-context comprehension.

Qwen3-30B-A3B-Instruct-2507 brings the following improvements:

  • General Capabilities – Stronger instruction following, logical reasoning, mathematics, science, and coding, with tighter tool-use integration.

  • Knowledge Coverage – Substantial gains in handling long-tail knowledge and multiple languages.

  • Alignment – Enhanced alignment with user intent for open-ended and subjective tasks, producing more helpful and high-quality responses.

  • Extended Context – Native support for 256K tokens, making it highly capable for long-context tasks such as document understanding and multi-turn reasoning.

  • Efficient Mixture-of-Experts (MoE) – Designed with 128 experts, of which only 8 are active at a time, improving efficiency while maintaining large-model accuracy.
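The "128 experts, 8 active" routing described above can be illustrated with a minimal top-k router sketch. This is a generic MoE routing pattern for illustration only, not Qwen3's actual router code; the logits here are random placeholders.

```python
import math
import random

def top_k_routing(logits, k=8):
    """Pick the k highest-scoring experts and softmax-normalize their
    weights over just those k, as in a standard top-k MoE router."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    z = sum(exps)
    return top, [e / z for e in exps]

random.seed(0)
router_logits = [random.gauss(0.0, 1.0) for _ in range(128)]  # one logit per expert
experts, weights = top_k_routing(router_logits, k=8)
print(len(experts))  # only 8 of 128 experts run for this token
```

Because only the 8 selected experts execute per token, the per-token compute tracks the ~3.3B activated parameters rather than the full 30.5B.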

Model Architecture

  • Type: Causal Language Model (CLM)
  • Number of Parameters (Total): 30.5B
  • Number of Parameters (Non-Embedding): 29.9B
  • Activated Parameters: ~3.3B per forward pass (Mixture of Experts)
  • Number of Layers: 48
  • Number of Attention Heads (GQA): 32 for Q and 4 for KV
  • Number of Experts: 128 (8 activated per token)
  • Context Length: 262,144 tokens (256K) natively
  • Model Source: Qwen/Qwen3-30B-A3B-Instruct-2507
  • License: apache-2.0
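Two back-of-the-envelope numbers follow directly from the architecture table above: the GQA grouping factor (query heads per shared KV head) and the fraction of parameters activated per token.

```python
# Figures taken from the architecture table above.
num_q_heads, num_kv_heads = 32, 4
total_params_b, active_params_b = 30.5, 3.3

# With GQA, each KV head is shared by a group of query heads.
q_heads_per_kv_group = num_q_heads // num_kv_heads       # 8 query heads per KV head
# Fraction of weights touched per token thanks to MoE routing.
active_fraction = active_params_b / total_params_b       # roughly 11%

print(q_heads_per_kv_group, round(active_fraction, 3))
```

The 8:1 query-to-KV grouping shrinks the KV cache eightfold versus full multi-head attention, which matters at the native 256K context length.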

QPC Configurations

Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL
--------- | --------------------- | ------------------- | --------------- | ---------------------- | ------------------- | ------------
MXFP6 | 4 | 16 | 1 | 1 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_8192cl_1bs_4devices_mxfp6_mxint8.tar.gz
MXFP6 | 4 | 16 | 1 | 1 | 16384 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_16384cl_1bs_4devices_mxfp6_mxint8.tar.gz
MXFP6 | 4 | 16 | 1 | 1 | 32768 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_32768cl_1bs_4devices_mxfp6_mxint8.tar.gz
MXFP6 | 8 | 16 | 1 | 1 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_8192cl_1bs_8devices_mxfp6_mxint8.tar.gz
MXFP6 | 8 | 16 | 1 | 1 | 16384 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_16384cl_1bs_8devices_mxfp6_mxint8.tar.gz
MXFP6 | 8 | 16 | 1 | 1 | 32768 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/Qwen/Qwen3-30B-A3B-Instruct-2507/qpc_16cores_1pl_32768cl_1bs_8devices_mxfp6_mxint8.tar.gz
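The download URLs above follow a fixed naming pattern encoding the compile parameters. A small hypothetical helper (not part of any official tooling) can reconstruct them; note that only the (devices, context length) combinations listed in the table are actually hosted.

```python
# Base path taken from the table above.
BASE = ("https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/"
        "Qwen/Qwen3-30B-A3B-Instruct-2507")

def qpc_url(context_len, devices, cores=16, prompt_len=1, batch=1):
    """Rebuild a QPC download URL from its compile parameters.
    Defaults match every row of the table (16 cores, PL=1, BS=1, MXFP6)."""
    return (f"{BASE}/qpc_{cores}cores_{prompt_len}pl_{context_len}cl_"
            f"{batch}bs_{devices}devices_mxfp6_mxint8.tar.gz")

print(qpc_url(8192, 4))  # matches the first row of the table
```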

Run This Model

# Download QPC
mkdir -p Qwen/Qwen3-30B-A3B-Instruct-2507
cd Qwen/Qwen3-30B-A3B-Instruct-2507
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute \
  --model_name Qwen/Qwen3-30B-A3B-Instruct-2507 \
  --qpc_path <path/to/qpc> \
  --prompt "# shortest path algorithm\n" \
  --generation_len 128
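The same invocation can be driven from Python via `subprocess`, which is convenient for scripting multiple prompts. This sketch only assembles the documented command; `<path/to/qpc>` remains a placeholder for your extracted QPC directory, and the actual run is commented out since it needs QAIC hardware.

```python
import subprocess

# Build the documented QEfficient execute command as an argv list.
cmd = [
    "python3", "-m", "QEfficient.cloud.execute",
    "--model_name", "Qwen/Qwen3-30B-A3B-Instruct-2507",
    "--qpc_path", "<path/to/qpc>",          # placeholder: extracted QPC dir
    "--prompt", "# shortest path algorithm\n",
    "--generation_len", "128",
]
# subprocess.run(cmd, check=True)  # uncomment on a machine with QAIC devices
print(" ".join(cmd[:3]))
```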

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model Qwen/Qwen3-30B-A3B-Instruct-2507 \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
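Once the server is up, it speaks the standard OpenAI-compatible chat API. A minimal client sketch, assuming the server is listening on localhost:8000 as started above (the request itself is commented out so the snippet is runnable offline):

```python
import json

# Chat-completion request payload for the vLLM OpenAI-compatible endpoint.
payload = {
    "model": "Qwen/Qwen3-30B-A3B-Instruct-2507",
    "messages": [{"role": "user", "content": "Write a shortest-path function."}],
    "max_tokens": 128,
}
body = json.dumps(payload)

# To actually send it (requires the server from the section above):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode(), headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
print(len(body) > 0)
```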