Qwen3 Coder 30B A3B Instruct

Model Overview

The Qwen3-Coder-30B-A3B-Instruct model delivers impressive performance and efficiency, featuring the following key enhancements:

  • Significant Performance among open models on agentic coding, agentic browser use, and other foundational coding tasks.
  • Long-context Capabilities with native support for 256K tokens, extendable up to 1M tokens using YaRN, optimized for repository-scale understanding.
  • Agentic Coding support for most platforms, such as Qwen Code and CLINE, featuring a specially designed function-call format.

Model Architecture

  • Type: Causal Language Model (CLM)
  • Number of Parameters: 30.5B in total and 3.3B activated
  • Number of Layers: 48
  • Number of Attention Heads (GQA): 32 for Q and 4 for KV
  • Number of Experts: 128
  • Number of Activated Experts: 8
  • Context Length: 262,144 tokens natively
  • Model Source: Qwen/Qwen3-Coder-30B-A3B-Instruct
  • License: apache-2.0
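As a rough illustration of the mixture-of-experts figures above, the ratio of activated to total experts explains why only about 3.3B of the 30.5B parameters are active per token. This is a back-of-the-envelope sketch; the exact split between expert and shared (attention, embedding, router) parameters is not given in the model card.

```python
# Back-of-the-envelope MoE arithmetic for Qwen3-Coder-30B-A3B-Instruct.
# Figures come from the architecture list above; the interpretation of the
# shared-parameter overhead is an illustrative assumption, not an official
# breakdown.
total_params_b = 30.5      # total parameters (billions)
active_params_b = 3.3      # activated parameters per token (billions)
num_experts = 128
active_experts = 8

# Fraction of experts consulted for each token.
expert_fraction = active_experts / num_experts  # 8 / 128 = 0.0625

# If every parameter lived in an expert, activation would be ~6.25% of
# 30.5B (~1.9B). The reported 3.3B is higher because attention layers,
# embeddings, and router weights are shared and always active.
all_expert_estimate_b = total_params_b * expert_fraction

print(f"expert fraction: {expert_fraction:.4f}")
print(f"naive all-expert estimate: {all_expert_estimate_b:.2f}B")
print(f"reported activated: {active_params_b}B")
```

The gap between the naive estimate and the reported activated-parameter count is a useful sanity check when sizing memory for sparse models.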

QPC Configurations

| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL | Generation Date |
|-----------|-----------------------|---------------------|-----------------|------------------------|---------------------|--------------|-----------------|
| MXFP6 | 2 | 16 | 1 | 1 | 4096 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Qwen/Qwen3-Coder-30B-A3B-Instruct/Qwen3_Coder_30B_A3B_Instruct_qpc_16cores_1pl_4096cl_1bs_2devices_mxfp6_mxint8.tar.gz | 05-Feb-2026 |
| MXFP6 | 2 | 16 | 1 | 1 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/Qwen/Qwen3-Coder-30B-A3B-Instruct/Qwen3_Coder_30B_A3B_Instruct_qpc_16cores_1pl_8192cl_1bs_2devices_mxfp6_mxint8.tar.gz | 05-Feb-2026 |

Run This Model

# Download QPC
mkdir -p Qwen/Qwen3-Coder-30B-A3B-Instruct
cd Qwen/Qwen3-Coder-30B-A3B-Instruct
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name Qwen/Qwen3-Coder-30B-A3B-Instruct --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
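Once the server is up, any OpenAI-compatible client can query it. The sketch below builds a chat-completion request against the endpoint started above; the host, port, prompt, and sampling parameters are illustrative placeholders, and actually sending the request requires the running server.

```python
import json
import urllib.request

# OpenAI-compatible chat-completion payload for the vLLM server above.
# Host and port match the --host/--port flags; adjust as needed.
payload = {
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [
        {"role": "user", "content": "Write a Python function for binary search."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(request) as response:
#     reply = json.load(response)
#     print(reply["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

Equivalently, any OpenAI SDK can be pointed at `http://localhost:8000/v1` as its base URL.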