# Llama 3.2 11B Vision Instruct

## Model Overview
The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image, and they outperform many of the available open-source and closed multimodal models on common industry benchmarks.
- Model Architecture: Llama 3.2-Vision is built on top of the Llama 3.1 text-only model, an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image-encoder representations into the core LLM (see the inference sketch after this list).
- Model Source: meta-llama/Llama-3.2-11B-Vision-Instruct
- License: llama3.2
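For reference, the model can be exercised end to end through the Hugging Face transformers API. The sketch below is a minimal example, assuming transformers >= 4.45 (which introduced Mllama support), access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint, and a GPU with enough memory for the 11B weights in bfloat16; the image URL is a placeholder to replace with your own.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# Load the instruction-tuned checkpoint and its multimodal processor.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image URL -- substitute any image you want to ask about.
image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)

# Build a chat-formatted prompt with one image and one text turn; the image
# token is spliced in by the processor's chat template.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```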
## QPC Configurations
| Precision | SoCs / Tensor Slicing | NSP Cores (per SoC) | Full Batch Size | Chunking Prompt Length (PL) | Context Length (CL) | Download | Generation Date |
|---|---|---|---|---|---|---|---|
| MXFP6 | 2 | 16 | 1 | 128 | 4096 | [Download](https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/meta-llama/Llama-3.2-11B-Vision-Instruct/Llama-3.2-11B-Vision-Instruct_qpc_16cores_128pl_4096cl_2devices_mxfp6_mxint8.tar.gz) | 21-Jan-2026 |
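The archive is a precompiled QPC binary for Qualcomm Cloud AI 100 hardware, and its filename encodes the compile settings from the table (16 NSP cores, 128 prompt length, 4096 context length, 2 devices, MXFP6 weights; the trailing mxint8 presumably denotes the KV-cache precision). Below is a minimal sketch for fetching and unpacking it; the destination directory is an arbitrary choice.

```python
import tarfile
import urllib.request
from pathlib import Path

# URL taken from the table above.
QPC_URL = (
    "https://dc00tk1pxen80.cloudfront.net/SDK1.20.4/meta-llama/"
    "Llama-3.2-11B-Vision-Instruct/"
    "Llama-3.2-11B-Vision-Instruct_qpc_16cores_128pl_4096cl_2devices_mxfp6_mxint8.tar.gz"
)

# Arbitrary local destination -- adjust to taste.
dest = Path("qpc/Llama-3.2-11B-Vision-Instruct")
dest.mkdir(parents=True, exist_ok=True)

# Download the tarball, then unpack the precompiled QPC next to it.
archive = dest / "qpc.tar.gz"
urllib.request.urlretrieve(QPC_URL, archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(dest)

print("Extracted:", *(p.name for p in dest.iterdir()), sep="\n  ")
```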