Research

Introducing AutoJudge: Streamlined inference acceleration via automated dataset curation

December 3, 2025

By 

ROMAN GARIPOV, FEDOR VELIKONIVTSEV, IVAN ERMAKOV, RUSLAN SVIRSCHEVSKI, VAGE EGIAZARIAN, MAX RYABININ

Summary

We introduce AutoJudge, a method that accelerates large language model (LLM) inference through task-specific lossy speculative decoding. Instead of matching the target model’s output distribution token by token, this method identifies which specific generated tokens affect downstream quality. Compared to prior approaches, AutoJudge does not require manual annotation, as it employs a classifier trained in a self-supervised manner.

AutoJudge can accept up to 40 draft tokens per verification cycle with only a slight accuracy drop, achieving 1.5–2x speedups over standard speculative decoding, and is easy to integrate into existing LLM inference frameworks. We will be presenting our research findings on AutoJudge at NeurIPS 2025 — come meet the team to learn more!

Speculative decoding speeds up generation by pairing a small “draft” model with the main “target” model. The draft proposes several next tokens; the target runs in parallel to verify them. Tokens that match the target are accepted; the first mismatch (and everything after) is rejected. This keeps the output distribution identical to the target model’s own decoding.
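The accept/reject step above can be sketched as follows. This is a minimal greedy-decoding illustration only, not the full sampling-based algorithm; `verify_draft` and its token lists are hypothetical names for this sketch:

```python
# Minimal sketch of greedy speculative-decoding verification.
# `draft_tokens`: tokens proposed by the small draft model.
# `target_tokens`: tokens the target model would emit at each position,
# obtained from a single parallel forward pass over the drafted prefix.

def verify_draft(draft_tokens, target_tokens):
    """Accept the longest matching prefix; on the first mismatch,
    keep the target's token and discard everything after it."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:               # first mismatch: reject it and everything after
            accepted.append(t)   # substitute the target's own token
            return accepted
        accepted.append(d)       # match: the draft token is accepted for free
    return accepted
```

For example, with a draft of `[1, 2, 3, 4]` and target tokens `[1, 2, 9, 4]`, only `[1, 2, 9]` survives: the mismatch at position 3 ends the accepted run.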

In practice, strict distribution matching isn’t always necessary. Lossy variants trade a tiny amount of quality for more speed. Judge Decoding is one such approach: it only rejects a mismatch if accepting it would harm the final answer. For example, math errors or logical bugs matter, but minor stylistic changes often don’t. Our work builds directly on this idea.

The catch with Judge Decoding is data: it needs humans to label “critical” mismatching tokens for each task, which is expensive and doesn’t transfer perfectly across domains. AutoJudge removes this bottleneck by automatically mining those important tokens—no human annotators required. 

The AutoJudge Method

Figure 1. The data collection stage of AutoJudge

AutoJudge consists of the following stages:

  1. Automatically mine “important” mismatches
    For a prompt, generate a target answer and locate where draft and target models disagree. Iteratively swap draft ↔ target tokens and re-evaluate the task (e.g. GSM8K answer equality or code unit tests). A mismatch is important if keeping the draft token breaks the final answer; otherwise it’s unimportant. This semi-greedy pass reliably surfaces at least one important token whenever answers differ.
  2. Train a tiny classifier on existing embeddings
    We use a simple logistic regression fed by transformer hidden states already computed during speculative decoding. Concatenating draft and target token embeddings works best and remains robust across regularization choices and small architectural variants.
  3. Accept “unimportant” mismatches at verification time
    During the verification phase—exactly where the baseline would reject a mismatching draft token—we call the classifier. If it predicts that the token is unimportant, we accept it and keep moving forward, increasing accepted tokens per speculation cycle. The approach is compatible with standard, tree-based, and single-model multi-head speculative decoding methods, and slots into popular stacks like vLLM, TensorRT-LLM, and TGI. In practice, we pick a high-recall threshold (≥90%) to safeguard accuracy while still skipping a large fraction of tokens.
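Stage 1, the automatic mining of importance labels, can be sketched roughly as below. Here `continue_from` and `is_correct` are hypothetical stand-ins for re-decoding with a forced token and for the task-level checker (e.g. GSM8K answer equality or code unit tests):

```python
# Hedged sketch of the mismatch-mining stage (stage 1), under assumed interfaces:
# `continue_from(pos, tok)` forces the draft token at `pos` and re-decodes the
# rest of the answer with the target model; `is_correct(answer)` is the task check.

def label_mismatches(mismatches, continue_from, is_correct):
    """For each (position, draft_token) mismatch, keep the draft token,
    regenerate the suffix, and mark the mismatch 'important' if the
    final answer becomes wrong."""
    labels = {}
    for pos, draft_tok in mismatches:
        answer = continue_from(pos, draft_tok)          # force draft token, re-decode
        labels[(pos, draft_tok)] = not is_correct(answer)  # important iff answer breaks
    return labels
```

The resulting binary labels are exactly what stage 2's logistic-regression classifier is trained on, with the already-computed draft and target hidden states as features.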
Figure 2. Example of extra accepted tokens resulting in faster inference

In Figure 2, we show how AutoJudge accepts more tokens during a speculative decoding step. AutoJudge adds a tiny "judge" that asks, at each mismatch, whether the difference actually changes the final answer. In the example, the mismatch is a harmless wording difference ("equals" vs. "becomes"), so we accept it and keep the rest of the drafted tokens. If the mismatch would change correctness (say, "+" vs. "−" in a math step), we reject it. By rejecting only critical mismatches, we keep longer chunks from the draft, so more tokens are accepted at once and generation is faster with little impact on quality.
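Putting the pieces together, the judge-augmented verification loop might look like this simplified sketch. It deliberately ignores that, after an accepted mismatch, the target's remaining tokens from that forward pass are conditioned on its own prefix and would be refreshed in practice; `is_important` is a hypothetical stand-in for the trained classifier:

```python
# Simplified sketch of verification with a judge classifier (stage 3).
# `is_important(pos, draft_tok, target_tok)` stands in for the logistic
# regression over concatenated draft/target hidden states.

def judge_verify(draft_tokens, target_tokens, is_important):
    """Like standard verification, but a mismatch is rejected only if the
    judge deems it important; harmless mismatches keep the draft token,
    so verification continues past them."""
    accepted = []
    for i, (d, t) in enumerate(zip(draft_tokens, target_tokens)):
        if d == t or not is_important(i, d, t):
            accepted.append(d)   # exact match or harmless mismatch: keep going
        else:
            accepted.append(t)   # critical mismatch: fall back to the target token
            break
    return accepted
```

With a judge that never flags anything, every drafted token is kept; with one that flags a given pair, the loop stops there exactly like the lossless baseline.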

Performance benchmarks

Accuracy-acceptance tradeoffs

Figure 3. Accuracy and the number of accepted tokens on GSM8K for (left) 8-shot Llama-3.2 1B draft / Llama-3.1 8B target and (right) 0-shot Llama-3.1 8B draft / Llama-3.1 70B target (all Instruct).

Figure 3 shows how AutoJudge shifts the speed–quality frontier: as the number of accepted draft tokens per cycle grows (x-axis), AutoJudge (red) stays near the accuracy of lossless speculative decoding, unlike the naive Top-K baseline, whose accuracy drops quickly. This holds for both model pairs (1B/8B on the left, 8B/70B on the right), so a practitioner can choose a threshold that yields higher tokens/s at minimal accuracy cost. In Figure 3 (right), accepting three times more tokens costs only about 1% accuracy: speculative decoding can safely accept up to 45 tokens per cycle with minimal loss in quality.

Inference speedup

We integrated AutoJudge into vLLM’s speculative decoding and measured end-to-end tokens/s on GPUs. (Setups included A100/H100; see notes below for model pairs.)

Mathematical reasoning (GSM8K)

Across model pairs, AutoJudge delivers consistent throughput gains with small accuracy trade-offs:

  • Llama-3.1-405B (target) / 8B (draft), 8xH100: 91.5% accuracy (≈4% drop), 60.1 tokens/s, 1.20× speedup.
  • Llama-3.1-70B (target) / 8B (draft), 4xA100: 89.9% accuracy (≈2% drop), 107.4 tokens/s, 1.49× speedup.
  • Llama-3.1-8B (target) / 1B (draft), 1xA100: 80.2% accuracy (≈3% drop), 169.2 tokens/s, 1.14× speedup.
    Baselines: 50.0 (405B), 72.3 (70B), 147.7 (8B) tokens/s.

Programming (LiveCodeBench)

AutoJudge automatically identifies critical tokens for code and boosts acceptance rates:

  • Llama-3.1-70B (target) / 8B (draft): Pass@1 28.0% (≈3% drop), ~35 accepted tokens/cycle (≈3.5×); baseline ~10.
  • Llama-3.1-8B (target) / 1B (draft): Pass@1 14.5% (≈2.5% drop), ~30 accepted tokens/cycle (≈2.3×); baseline ~13.

Offloading scenarios (bandwidth-limited)

When the link bandwidth is the bottleneck, longer draft windows become viable and speedups amplify:

  • 8B → 70B (GSM8K), faster operating point: 2.4 tokens/s, 1.98×, accuracy 90.4% (≈3% drop).
  • 8B → 70B (GSM8K), conservative operating point: 1.4 tokens/s, 1.20×, accuracy 95.4% (≈+0.5%).
    Baseline: 1.19 tokens/s.

Composing with EAGLE-2

AutoJudge stacks with EAGLE-2 (which drafts from the target’s hidden states, with no separate draft model). On GSM8K (0-shot) with Llama-3.1-8B-Instruct, AutoJudge adds ~8–20% tokens/s over the EAGLE-2 baseline at small accuracy deltas: 96.8, 102.6, and 107.5 tokens/s vs. 89.8 baseline, with accuracies of 81.3%, 81.0%, and 78.1%, respectively.

Limitations & practical notes

  • Speedups depend on how often mismatches are genuinely unimportant for the metric (e.g., answer equality, unit tests). Tasks such as creative writing often leave less headroom; see more experiments (including long-context GSM and writing) in the paper appendix.
  • One can favor high-recall classifier thresholds (≥90%) to protect quality while still skipping many tokens. Threshold values should ideally be tuned per task.
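One illustrative way to pick such a high-recall threshold on a held-out set of labeled mismatches is sketched below. This is an assumption-laden sketch, not the paper's exact procedure; `scores` are hypothetical classifier probabilities that a mismatch is important, and `labels` are the mined importance labels:

```python
# Illustrative sketch: choose the largest classifier threshold whose recall on
# held-out "important" mismatches is still >= min_recall, so critical tokens
# are rarely let through while harmless ones can still be accepted.

def pick_threshold(scores, labels, min_recall=0.9):
    """scores: classifier P(important) per validation mismatch.
    labels: 1 if the mismatch was mined as important, else 0.
    Returns the highest threshold meeting the recall constraint."""
    positives = sum(labels)
    for thr in sorted(set(scores), reverse=True):   # try strictest first
        caught = sum(1 for s, y in zip(scores, labels) if y and s >= thr)
        if positives == 0 or caught / positives >= min_recall:
            return thr                              # first hit is the highest valid one
    return min(scores)                              # fallback: flag everything
```

Scanning thresholds from strict to lenient and stopping at the first one that meets the recall target returns the most permissive safe operating point, which maximizes the number of accepted tokens subject to the quality constraint.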

Conclusion

AutoJudge offers a simple and fully automated algorithm that accelerates the speculative decoding loop: accept harmless mismatches, save target model calls, and go faster. It removes manual labeling from judge-style methods, learns what matters per task, and uses a tiny classifier on embeddings you already compute to ensure low runtime overhead. 

References

[1] Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding, 2023. URL https://arxiv.org/abs/2211.17192

[2] Gregor Bachmann, Sotiris Anagnostidis, Albert Pumarola, Markos Georgopoulos, Artsiom Sanakoyeu, Yuming Du, Edgar Schönfeld, Ali Thabet, and Jonas Kohler. Judge Decoding: Faster speculative sampling requires going beyond model alignment, 2025. URL https://arxiv.org/abs/2501.19309

[3] Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. EAGLE-2: Faster inference of language models with dynamic draft trees, 2024. URL https://arxiv.org/abs/2406.16858
