
Mistral Small 3

A 24B-parameter model that rivals GPT-4o mini and larger models such as Llama 3.3 70B. Ideal for chat use cases such as customer support, translation, and summarization.

About model

Mistral Small 3 is a 24B-parameter large language model with state-of-the-art capabilities, suited to fast-response conversational agents, low-latency function calling, and subject-matter experts created via fine-tuning. It is exceptionally knowledge-dense and supports dozens of languages, making it a good fit for hobbyists, organizations, and enterprises handling sensitive data.
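As a sketch of how a conversational-agent request to the model might look through an OpenAI-compatible chat completions endpoint (the base URL, environment variable, and model identifier below are illustrative assumptions, not values taken from this page):

```python
# Minimal sketch of a chat request to Mistral Small 3 via an
# OpenAI-compatible endpoint. The endpoint URL, env var name, and
# model slug are illustrative assumptions, not confirmed values.
import json
import os
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = os.environ.get("API_KEY", "sk-placeholder")

payload = {
    "model": "mistral-small-3",  # hypothetical model slug
    "messages": [
        {"role": "system", "content": "You are a concise support agent."},
        {"role": "user", "content": "Summarize our refund policy in one sentence."},
    ],
    "max_tokens": 256,
}

# Build the request; sending it is left commented out since the
# endpoint above is a placeholder.
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# response = urllib.request.urlopen(req)  # uncomment against a real endpoint
```

The same request shape covers the customer-support, translation, and summarization use cases named above; only the `messages` content changes.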

Performance benchmarks

[Benchmark table not recoverable from the page extraction: it compared Mistral Small 3 against related open-source models and competitor closed-source models (Claude Opus 4.6, OpenAI o3, OpenAI o1, GPT-4o) on AIME 2025, GPQA Diamond, HLE, LiveCodeBench, MATH500, and SWE-bench Verified.]

    Model details
    • Model provider
      Mistral AI
    • Type
      Chat
    • Main use cases
      Chat
      Medium General Purpose
      Function Calling
    • Features
      Function Calling
    • Deployment
      On-Demand Dedicated
      Monthly Reserved
    • Parameters
      24B
    • Context length
      32K
    • Input price

      $0.10 / 1M tokens

    • Output price

      $0.30 / 1M tokens

    • Input modalities
      Text
    • Output modalities
      Text
    • Released
      January 28, 2025
    • Quantization level
      FP16
    • Category
      Chat
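Given the listed prices ($0.10 per 1M input tokens, $0.30 per 1M output tokens), the cost of a request works out as in this small sketch; the token counts in the example are made-up numbers, not figures from this page:

```python
# Cost estimate from the listed per-token prices for Mistral Small 3.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens (from the spec list)
OUTPUT_PRICE_PER_M = 0.30  # USD per 1M output tokens (from the spec list)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token reply.
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # → $0.000350
```

At these rates, even a full 32K-token context costs fractions of a cent on the input side, which is consistent with the model's positioning for high-volume chat workloads.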