
Granite-3.2-8B-Instruct

Model Overview

Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context language model fine-tuned for thinking capabilities. Built on top of Granite-3.1-8B-Instruct, it was trained on a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. Its thinking capability is controllable, so it can be invoked only when required.

  • Model Architecture: Granite-3.2-8B-Instruct is an 8-billion-parameter language model.
  • Website: Granite Docs
  • Model Source: ibm-granite/granite-3.2-8b-instruct
  • Release Date: Feb 26th, 2025
  • License: Apache 2.0
  • Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

QPC Configurations

  • Precision: MXFP6
  • SoCs / Tensor slicing: 2
  • NSP-Cores (per SoC): 16
  • Full Batch Size: 1
  • Chunking Prompt Length: 128
  • Context Length (CL): 8192
  • Download URL: https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/ibm-granite/granite-3.2-8b-instruct/granite-3.2-8b-instruct_qpc_16cores_128pl_8192cl_1fbs_2devices_mxfp6_mxint8.tar.gz
  • Generation Date: 19-Jan-2026

Run This Model

# Download QPC
mkdir -p ibm-granite/granite-3.2-8b-instruct
cd ibm-granite/granite-3.2-8b-instruct
wget <Download URL>
tar xzvf <downloaded filename.tar.gz>

# Run QPC
python3 -m QEfficient.cloud.execute --model_name ibm-granite/granite-3.2-8b-instruct --qpc_path <path/to/qpc> --prompt "# shortest path algorithm\n" --generation_len 128

API Endpoint

# Start REST endpoint with vLLM
VLLM_QAIC_MAX_CPU_THREADS=8 VLLM_QAIC_QPC_PATH=/path/to/qpc python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --model ibm-granite/granite-3.2-8b-instruct \
  --max-model-len <Context Length> \
  --max-num-seqs <Full Batch Size> \
  --max-seq-len-to-capture <Chunking Prompt Length> \
  --device qaic \
  --block-size 32
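
Once the server is up, it exposes the standard OpenAI-compatible REST API. A minimal smoke test, assuming the server started by the command above is reachable at localhost:8000 (adjust host and port to your deployment):

```shell
# Build the request body for the OpenAI-compatible /v1/completions route.
PAYLOAD='{
  "model": "ibm-granite/granite-3.2-8b-instruct",
  "prompt": "# shortest path algorithm\n",
  "max_tokens": 128
}'

# Send the request; the fallback message keeps the command non-fatal
# if the server is not running yet.
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "server not reachable at localhost:8000"
```

The response is a standard completions object; the generated text is in `choices[0].text`.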