Apriel-1.6-15B-Thinker
Frontier-level multimodal reasoning in a compact, token-efficient model

About model
- 57: Matches Qwen-235B & DeepSeek-v3.2 Exp
- 88%: Elite mathematical reasoning
- 30%: Better reasoning efficiency vs. v1.5
- Frontier Performance: 57 on AA Index, matching models 15x its size
- Token Efficiency: 30% fewer reasoning tokens than Apriel-1.5
- Enterprise Ready: 69% Tau2 Bench Telecom, 69% IFBench
- Single GPU: 15B parameters fit entirely on one GPU
| Model | AIME 2025 | GPQA Diamond | HLE | LiveCodeBench | MATH500 | SWE-bench verified |
|---|---|---|---|---|---|---|
| Apriel-1.6-15B-Thinker | 88.0% | 73.0% | 10.0% | 81.0% | | 23.0% |
API usage
Endpoint:
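The endpoint URL is deployment-specific. As a minimal sketch, assuming an OpenAI-compatible chat completions API (the `API_URL` and `MODEL_ID` values below are placeholders, not confirmed identifiers):

```python
import json

# Hypothetical values: substitute the endpoint and model ID from your
# deployment; neither is a confirmed name from the model card.
API_URL = "https://example.com/v1/chat/completions"
MODEL_ID = "apriel-1.6-15b-thinker"

def build_request(prompt: str, max_tokens: int = 2048) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Solve: what is 17 * 24?")
print(json.dumps(payload, indent=2))

# Send with e.g.:
#   import requests
#   r = requests.post(API_URL, json=payload,
#                     headers={"Authorization": "Bearer <token>"})
```

The payload shape follows the widely used chat completions convention; confirm the exact request schema against the provider's API documentation.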
Model card
Architecture Overview:
• 15B parameter multimodal model supporting image-text-to-text reasoning with 131K context window for complex tasks.
• Built on continual pre-training across billions of tokens covering math, code, science, logical reasoning, and multimodal image-text data.
• Simplified chat template for easier output parsing with reasoning steps followed by final response delimiter.
• Fits entirely on a single GPU, making it highly memory-efficient for deployment.
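Since the chat template emits reasoning steps followed by a final-response delimiter, parsing can be a simple string split. A sketch, assuming the `[BEGIN FINAL RESPONSE]` / `[END FINAL RESPONSE]` markers used by earlier Apriel releases (check the tokenizer's chat template for the exact strings):

```python
def extract_final_response(generation: str,
                           begin: str = "[BEGIN FINAL RESPONSE]",
                           end: str = "[END FINAL RESPONSE]") -> str:
    """Strip the reasoning prefix and return only the final answer.

    The delimiter strings are assumptions based on prior Apriel
    releases; verify them against the model's chat template.
    """
    start = generation.find(begin)
    if start == -1:
        return generation.strip()  # no delimiter: return everything
    start += len(begin)
    stop = generation.find(end, start)
    if stop == -1:
        stop = len(generation)  # unterminated final block
    return generation[start:stop].strip()

sample = ("Here are my reasoning steps: 17 * 24 = 408 ... "
          "[BEGIN FINAL RESPONSE]408[END FINAL RESPONSE]")
print(extract_final_response(sample))  # -> 408
```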
Training Methodology:
• Multi-stage training: continual pre-training, supervised fine-tuning (2.4M samples), and reinforcement learning optimization.
• Training data includes ~15% from NVIDIA Nemotron collection for depth up-scaling and diverse domain coverage.
• RL stage specifically optimizes reasoning efficiency by using fewer tokens, stopping earlier when confident, and giving direct answers on simple queries.
• Incremental lightweight multimodal SFT following text-based supervised fine-tuning phase.
Performance Characteristics:
• Elite reasoning: 88% AIME 2025, 73% GPQA Diamond, 81% LiveCodeBench, 79% MMLU Pro.
• Strong instruction following: 69% IFBench, 83.34% Multi IF, 57.2% Agent IF.
• Enterprise-ready: 69% Tau2 Bench Telecom, 66.67% Tau2 Bench Retail, 58% Tau2 Bench Airline.
• Advanced function calling: 63.5% BFCL v3, 33.2% ComplexFuncBench.
• Multimodal excellence: 72% MMMU validation, 60.28% MMMU-PRO, 79.9% MathVista, 86.04% AI2D Test.
• Reduces reasoning token usage by 30%+ compared to Apriel-1.5 while maintaining or improving task performance.
Applications & use cases
Multimodal Reasoning:
• Visual question answering and complex image understanding tasks requiring deep reasoning.
• Mathematical problem solving from visual inputs including charts, diagrams, and equations.
• Document analysis combining text and visual elements for comprehensive understanding.
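For image-text-to-text tasks like the above, a request pairs an image with a text question in one user message. A minimal sketch, assuming the OpenAI-style multi-part content format (the schema choice is an assumption; confirm it with your serving stack):

```python
import base64

def image_message(image_bytes: bytes, question: str) -> dict:
    """Build a user message combining an image and a text question,
    using the OpenAI-style multi-part content format (an assumption,
    not confirmed by the model card)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
            {"type": "text", "text": question},
        ],
    }

# Placeholder bytes stand in for a real chart image.
msg = image_message(b"\x89PNG...", "What trend does this chart show?")
print(msg["content"][1]["text"])
```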
Code & Development:
• Code assistance and generation with logical reasoning and multi-step problem decomposition.
• Technical documentation understanding and creation with visual component support.
• Software development workflows requiring reasoning over code structure and logic.
Enterprise Applications:
• Telecom, retail, and airline domain-specific workflows with strong Tau2 Bench performance.
• Complex instruction following and function calling for business automation.
• Agent-based systems requiring reliable instruction adherence and multi-turn interactions.
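Function calling for such automation typically means advertising tool schemas to the model and routing its tool calls to local handlers. A sketch using the common OpenAI function-calling schema (the tool name and handler below are hypothetical; the card reports BFCL / ComplexFuncBench scores but does not prescribe a tool format):

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema.
CHECK_ORDER_TOOL = {
    "type": "function",
    "function": {
        "name": "check_order_status",
        "description": "Look up the shipping status of a retail order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string",
                             "description": "Order identifier"},
            },
            "required": ["order_id"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local handler (stubbed)."""
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "check_order_status":
        return f"Order {args['order_id']}: shipped"
    raise ValueError(f"unknown tool {tool_call['name']}")

# Simulate a tool call as the model would emit it.
print(dispatch({"name": "check_order_status",
                "arguments": '{"order_id": "A-123"}'}))
```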
Knowledge & Question Answering:
• Information retrieval combining text and visual context for accurate responses.
• Scientific and technical question answering with reasoning transparency.
• Educational applications requiring step-by-step problem solving explanations.
Creative & General Purpose:
• Question answering across diverse domains with multimodal context.
• Logical reasoning tasks requiring systematic analysis and structured thinking.
• Real-world workflows where efficiency and single-GPU deployment are critical constraints.
- Model provider: ServiceNow AI
- Type: Reasoning, Vision, Chat
- Main use cases: Reasoning, Vision
- Deployment: Serverless, Monthly Reserved
- Parameters: 15B
- Context length: 128K
- Input modalities: Text, Image
- Output modalities: Text
- Released: November 28, 2025
- Quantization level: BF16
- External link
- Category: Chat