gpt-oss-20b

Model Overview

OpenAI’s GPT-OSS models (gpt-oss-120b and gpt-oss-20b) are open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. gpt-oss-20b targets lower-latency, local, or specialized use cases.

  • Model Architecture: 21B total parameters with 3.6B active parameters (Mixture-of-Experts). Trained on the harmony response format; the model must be used with the harmony format, as it will not work correctly otherwise.
  • Model Source: openai/gpt-oss-20b
  • License: Apache 2.0 license. Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
  • Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
  • Full chain-of-thought: Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. The chain of thought is not intended to be shown to end users.
  • Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
  • Agentic capabilities: Use the models’ native capabilities for function calling, web browsing, Python code execution, and Structured Outputs.
  • Native MXFP4 quantization: The models are trained with native MXFP4 precision for the MoE layers, allowing gpt-oss-20b to run within 16 GB of memory.
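Since the model only works with the harmony response format, the shape of a harmony prompt is worth seeing. Below is a minimal, illustrative sketch of the message layout (the token names come from the harmony format; the `render_harmony` helper is hypothetical). In practice, the tokenizer’s built-in chat template or the openai_harmony library should be used instead, since they also handle channels and other details.

```python
# Hypothetical helper: renders chat messages into harmony-format text.
# This is a simplified sketch of the layout, not a full implementation.

def render_harmony(messages):
    """Render each message as <|start|>{role}<|message|>{content}<|end|>,
    then leave an open assistant turn for the model to complete."""
    parts = []
    for msg in messages:
        parts.append(f"<|start|>{msg['role']}<|message|>{msg['content']}<|end|>")
    parts.append("<|start|>assistant")  # generation continues from here
    return "".join(parts)


# Reasoning effort (low / medium / high) is configured via the system message.
prompt = render_harmony([
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain MXFP4 quantization in one sentence."},
])
```

The same structure is produced automatically when a chat-template-aware tokenizer is given a standard `messages` list.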

QPC Configurations

| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length | Context Length (CL) | Download URL |
|---|---|---|---|---|---|---|
| MXFP6 | 4 | 8 | 1 | 256 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/openai/gpt-oss-20b/qpc_8cores_256pl_8192cl_1bs_4devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 8 | 8 | 1 | 256 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/openai/gpt-oss-20b/qpc_8cores_256pl_8192cl_1bs_8devices_mxfp6_mxint8.tar.gz |
| MXFP6 | 1 | 8 | 1 | 256 | 8192 | https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/openai/gpt-oss-20b/qpc_8cores_256pl_8192cl_1bs_1devices_mxfp6_mxint8.tar.gz |
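The QPC filenames above follow a regular pattern (cores, prompt length, context length, batch size, device count, precision). A small helper can reconstruct a download URL from those configuration values. Note this naming scheme is inferred from the three listed URLs and may change across SDK releases or precisions:

```python
# Build a QPC download URL from the configuration values in the table above.
# Assumption: the base URL and filename pattern are inferred from the listed
# entries and are not guaranteed for other SDK versions.

BASE = "https://dc00tk1pxen80.cloudfront.net/SDK1.20.2/openai/gpt-oss-20b"

def qpc_url(cores=8, prompt_len=256, context_len=8192,
            batch_size=1, devices=4, precision="mxfp6_mxint8"):
    name = (f"qpc_{cores}cores_{prompt_len}pl_{context_len}cl_"
            f"{batch_size}bs_{devices}devices_{precision}.tar.gz")
    return f"{BASE}/{name}"

# e.g. the 4-SoC configuration from the first row:
url = qpc_url(devices=4)
```

The returned URL for `devices=4` matches the first table row exactly; changing `devices` to 8 or 1 reproduces the other two rows.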