
Mxbai Rerank Large V2

A 1.5B-parameter, RL-trained reranking model that achieves state-of-the-art accuracy across 100+ languages with an 8K context window, outperforming Cohere and Voyage rerankers.

About model

Mxbai Rerank Large V2 is a powerful reranker offering state-of-the-art accuracy with strong efficiency, plus multilingual support for 100+ languages. It suits users seeking a simple end-to-end retrieval solution.
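Conceptually, a reranker scores each (query, document) pair and reorders retrieved candidates by that score. A minimal sketch, using a toy lexical-overlap scorer as a stand-in for the model's relevance score (the `rerank` and `overlap_score` helpers are illustrative, not Mixedbread's API):

```python
from typing import Callable, List, Tuple

def rerank(query: str, documents: List[str],
           score: Callable[[str, str], float],
           top_k: int = 3) -> List[Tuple[str, float]]:
    """Score each document against the query and return the top_k
    highest-scoring (document, score) pairs, best first."""
    scored = [(doc, score(query, doc)) for doc in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

def overlap_score(query: str, doc: str) -> float:
    """Toy word-overlap scorer standing in for the model's relevance score."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = [
    "The Eiffel Tower is in Paris.",
    "Reranking reorders retrieved documents by relevance.",
    "Bread is baked from flour and water.",
]
top = rerank("what does a reranking model do", docs, overlap_score, top_k=1)
```

In a real pipeline the scorer would be a call to the deployed model; everything else about the reorder-and-truncate flow stays the same.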

  • Model card

    🌟 Features

    • State-of-the-art performance and strong efficiency
    • Multilingual support (100+ languages, outstanding English and Chinese performance)
    • Code support
    • Long-context support

    Benchmark Results

    | Model | BEIR Avg | Multilingual | Chinese | Code Search | Latency (s) |
    |---|---|---|---|---|---|
    | mxbai-rerank-large-v2 | 57.49 | 29.79 | 84.16 | 32.05 | 0.89 |
    | mxbai-rerank-base-v2 | 55.57 | 28.56 | 83.70 | 31.73 | 0.67 |
    | mxbai-rerank-large-v1 | 49.32 | 21.88 | 72.53 | 30.72 | 2.24 |

    *Latency measured on an A100 GPU

    Training Details

    The models were trained using a three-step process:

    1. GRPO (Guided Reinforcement Prompt Optimization)
    2. Contrastive Learning
    3. Preference Learning
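The contrastive step can be illustrated with a generic InfoNCE-style loss, which pushes the positive (query, document) score above sampled negatives. This is a sketch of the general technique, not Mixedbread's exact objective; the scores and temperature here are hypothetical:

```python
import math
from typing import List

def info_nce_loss(pos_score: float, neg_scores: List[float],
                  temperature: float = 0.05) -> float:
    """InfoNCE-style contrastive loss: the negative log-softmax
    probability of the positive pair among all candidates."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # stabilize the log-sum-exp
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[0]

# Loss is small when the positive outscores the negatives, large otherwise.
loss_good = info_nce_loss(pos_score=0.9, neg_scores=[0.1, 0.2])
loss_bad = info_nce_loss(pos_score=0.1, neg_scores=[0.9, 0.2])
```

Minimizing this loss teaches the model to rank relevant documents above hard negatives, which is the core of the contrastive stage.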

Model details
  • Model provider
    Mixedbread AI
  • Type
    Rerank
  • Main use cases
    Rerank
  • Deployment
    Serverless
    On-Demand Dedicated
  • Parameters
    1.5B
  • Context length
    8192
  • Input price
    $0.10 / 1M tokens

  • Input modalities
    Text
  • Output modalities
    Text
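Given the listed input price of $0.10 per 1M tokens, serverless cost scales linearly with token volume. A rough estimate (the `estimate_cost` helper is illustrative; actual billing may differ):

```python
def estimate_cost(total_tokens: int, price_per_million: float = 0.10) -> float:
    """Estimated input cost in USD at the listed $0.10 / 1M token price."""
    return total_tokens / 1_000_000 * price_per_million

# e.g. reranking ~250K tokens of query+document text
cost = estimate_cost(250_000)  # → 0.025 USD
```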