Llama 4 Scout
State-of-the-art 109B-parameter model with 17B active parameters and a long context window, excelling at multi-document analysis, codebase reasoning, and personalized tasks.
| Model | AIME 2025 | GPQA Diamond | HLE | LiveCodeBench | MATH500 | SWE-bench Verified |
|---|---|---|---|---|---|---|
| Llama 4 Scout | 51.8% | | | | | |

*Interactive benchmark table: scores for related open-source and competitor closed-source comparison models are shown on the model page.*
This model is not available on Together’s Serverless API.
Deploy this model on an on-demand Dedicated Endpoint or pick a supported alternative from the Model Library.
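Once a dedicated endpoint is running, the model is reachable through Together's OpenAI-compatible chat completions route. The sketch below assembles and sends a request; the model ID and URL are assumptions based on Together's standard naming, so substitute the values shown on your endpoint's page.

```python
# Sketch: querying Llama 4 Scout on a Together dedicated endpoint via the
# OpenAI-compatible chat completions API. MODEL_ID and API_URL are assumptions;
# use the values shown for your own endpoint.
import json
import os
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"  # standard Together route
MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"    # assumed model ID


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON body for a chat completion request."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query(prompt: str) -> str:
    """Send the request; requires TOGETHER_API_KEY in the environment."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Print the request body only; query() performs the actual network call.
    print(json.dumps(build_request("Summarize these three reports."), indent=2))
```

Long-context use cases such as multi-document analysis follow the same shape: concatenate the documents into the user message (or a series of messages) and raise `max_tokens` as needed.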
- Type: Chat, Vision
- Main use cases: Chat, Function Calling, Vision
- Features: Function Calling
- Fine-tuning: Supported
- Parameters: 109B
- Context length: 1M
- Quantization level: FP16
- External link
- Category: Chat
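Since the spec list includes function calling, a request can attach tool definitions in the OpenAI-style `tools` schema that Together's chat completions API accepts. The sketch below builds such a request body; the `get_weather` tool is hypothetical, purely for illustration, and the model ID is an assumption.

```python
# Sketch: a function-calling request body for Llama 4 Scout using the
# OpenAI-style "tools" schema. The get_weather tool is hypothetical.
import json

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed model ID

# Hypothetical tool definition, described in JSON Schema form.
GET_WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}


def build_tool_request(prompt: str) -> dict:
    """Chat completion body that lets the model choose to call get_weather."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [GET_WEATHER_TOOL],
        "tool_choice": "auto",  # the model decides whether to emit a tool call
    }


if __name__ == "__main__":
    print(json.dumps(build_tool_request("What's the weather in Lisbon?"), indent=2))
```

When the model elects to call the tool, the response carries a `tool_calls` entry with the function name and JSON arguments; your code executes the function and returns the result in a follow-up `tool` message.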