Research

Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding

March 12, 2024

By Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, Beidi Chen

Introduction

We introduce Sequoia, a scalable, robust, and hardware-aware speculative decoding framework that improves LLM inference speed on consumer GPUs (with offloading), as well as on high-end GPUs (on-chip), without any approximations. We show below that Sequoia—by creating large trees of speculated tokens—can serve Llama2-70B on a single RTX-4090 with an average time between tokens (TBT) as low as 0.57s, which is 8X faster than a highly optimized offloading serving system, and 9X faster than DeepSpeed-Zero-Inference. In the on-chip setting, Sequoia improves the decoding speed of Llama2-7B, Llama2-13B, and Vicuna-33B on an A100 GPU by up to 4.04x, 3.73x, and 2.27x, respectively.
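For readers new to the technique, the sketch below shows the basic speculate-and-verify loop for a single chain of draft tokens, with a simple greedy acceptance rule. It is a minimal illustration, not Sequoia's algorithm: Sequoia verifies an entire tree of candidates in one target-model forward pass and uses an acceptance rule that preserves the target distribution exactly. The `draft_model` and `target_model` callables are assumptions of ours, taken to map a token sequence to per-position next-token logits.

```python
import torch

def speculative_step(target_model, draft_model, ctx: torch.Tensor, k: int = 4):
    """One speculate-and-verify step over a single chain of k draft tokens.

    Minimal sketch with greedy (argmax) acceptance. Sequoia instead verifies
    a whole *tree* of candidates per forward pass, with a rule that keeps the
    output distribution identical to the target model's.
    """
    # 1. Draft: the small model cheaply proposes k tokens autoregressively.
    seq = ctx.clone()
    for _ in range(k):
        logits = draft_model(seq)                # (len(seq), vocab_size)
        seq = torch.cat([seq, logits[-1].argmax().view(1)])

    # 2. Verify: ONE forward pass of the big model scores every proposal.
    preds = target_model(seq).argmax(dim=-1)     # target's pick per position

    # 3. Accept the longest prefix of drafts the target agrees with.
    n = ctx.shape[0]
    accepted = n
    for i in range(k):
        if seq[n + i] != preds[n + i - 1]:
            break
        accepted = n + i + 1

    # Every step still gains one token: the target's own next-token choice.
    return torch.cat([seq[:accepted], preds[accepted - 1].view(1)])
```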

Inference Speed with Sequoia

Offloading Results

GPU      | CPU→GPU Bandwidth (GB/s) | Target Model | Draft Model    | Sequoia TBT (s/token) | Baseline TBT (s/token) | Speedup
RTX 4090 | 31.5                     | Llama2-70B   | Llama2-7B      | 0.57                  | 4.54                   | 7.96x
RTX 4090 | 31.5                     | Vicuna-33B   | TinyVicuna-1B  | 0.35                  | 1.78                   | 5.09x
RTX 4090 | 31.5                     | Llama2-22B   | TinyLlama-1.1B | 0.17                  | 0.95                   | 5.59x
RTX 4090 | 31.5                     | Llama2-13B   | TinyLlama-1.1B | 0.09                  | 0.27                   | 3.00x
2080 Ti  | 15.8                     | Vicuna-33B   | TinyVicuna-1B  | 0.87                  | 4.81                   | 5.53x
2080 Ti  | 15.8                     | Llama2-22B   | TinyLlama-1.1B | 0.53                  | 3.04                   | 5.74x
2080 Ti  | 15.8                     | Llama2-13B   | TinyLlama-1.1B | 0.34                  | 1.53                   | 4.50x

On-chip Results

GPU  | HBM→SRAM Bandwidth (GB/s) | Target Model | Draft Model       | Sequoia TBT (ms/token) | Baseline TBT (ms/token) | Speedup
A100 | 1,935                     | Llama2-7B    | JackFram-68M      | 6.0                    | 24.2                    | 4.04x
A100 | 1,935                     | Llama2-7B    | JackFram-68M      | 7.6                    | 24.2                    | 3.18x
A100 | 1,935                     | Llama2-13B   | JackFram-68M      | 8.4                    | 31.2                    | 3.73x
A100 | 1,935                     | Llama2-13B   | JackFram-68M      | 9.8                    | 31.2                    | 3.19x
A100 | 1,935                     | Vicuna-33B   | ShearedLlama-1.3B | 23.4                   | 53.2                    | 2.27x
A100 | 1,935                     | Vicuna-33B   | ShearedLlama-1.3B | 24.3                   | 53.2                    | 2.19x

Sequoia can speed up LLM inference for a variety of model sizes and types of hardware. We evaluate Sequoia with LLMs of various sizes (Llama2-70B-chat, Vicuna-33B, Llama2-22B, Llama2-13B, and Llama2-7B), in both the offloading (RTX 4090 and 2080 Ti GPUs) and on-chip (A100) settings. We use prompts from MT-Bench in the offloading setting and from the C4 validation set in the on-chip setting. The evaluation results are listed above.

Here we show a demo of Llama2-70B inference on a single RTX 4090, with and without Sequoia (video plays at 4X speed).

Why Sequoia?

Sequoia significantly accelerates LLM serving in both the offloading and on-chip settings via core improvements to speculative decoding. First, Sequoia scales better with the number of speculated tokens: it uses a dynamic programming algorithm to search for the tree structure that maximizes the expected number of accepted tokens at each budget (i.e., the size of the speculated tree). Second, by sampling without replacement, Sequoia is more robust across decoding temperatures than top-k sampling or sampling with replacement. Finally, Sequoia provides a hardware-aware optimizer that selects the optimal tree size and depth for each hardware configuration. For further details, please see our paper.
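To make the first point concrete, here is a small dynamic program in the same spirit as the tree search described above. Given per-rank acceptance probabilities (the chance that the k-th-best draft child of a node is the token the target accepts), it computes the best achievable expected number of accepted tokens at each tree-size budget. The probabilities in `P`, the positional-independence assumption, and the function names are our simplifications for illustration; the paper's formulation is more general.

```python
from functools import lru_cache

# Hypothetical per-rank acceptance probabilities: P[j] is the chance that the
# (j+1)-th-ranked draft child of a node is the token the target accepts.
# Made-up numbers; Sequoia estimates these empirically per model pair.
P = (0.60, 0.15, 0.08, 0.05, 0.03)

@lru_cache(maxsize=None)
def best_tree(n: int) -> float:
    """Max expected number of accepted tokens for a speculation tree of n nodes."""
    return split(n, 0) if n > 0 else 0.0

@lru_cache(maxsize=None)
def split(n: int, j: int) -> float:
    """Best value from distributing n nodes among child ranks j, j+1, ..."""
    if n == 0 or j == len(P):
        return 0.0
    best = split(n, j + 1)                      # option: give rank j no nodes
    for m in range(1, n + 1):                   # option: rank-j subtree of m nodes
        gain = P[j] * (1.0 + best_tree(m - 1))  # child accepted, then its subtree
        best = max(best, gain + split(n - m, j + 1))
    return best

for budget in (8, 16, 32, 64):
    print(budget, round(best_tree(budget), 3))
```

Even with these made-up probabilities, the expected gain grows with the budget but with diminishing returns, which is why the hardware-aware choice of budget (discussed below) matters.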

Left (Scalability): Handcrafted tree structures do not perform well at large speculation budgets, whereas Sequoia's trees keep improving. Right (Robustness): The acceptance rates of different methods when 5 candidates are sampled for the next token. Sampling with replacement (SpecTr) fails at low temperature, and top-k sampling fails at high temperature; Sequoia, which samples without replacement, attains the highest acceptance rate.
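The robustness result has a simple mechanical explanation, sketched below with made-up numbers: sampling without replacement removes each chosen token's probability mass and renormalizes before the next draw, so a peaked low-temperature distribution still yields k distinct candidates (where sampling with replacement would waste draws on duplicates), while a flat high-temperature distribution is still explored stochastically (where top-k is fixed). This is our simplified sketch of the drafting step only, not Sequoia's full verification algorithm.

```python
import torch

def draft_children(probs: torch.Tensor, k: int) -> list:
    """Pick k distinct sibling draft tokens by sampling without replacement."""
    probs = probs.clone()
    picks = []
    for _ in range(k):
        idx = int(torch.multinomial(probs, 1))
        picks.append(idx)
        probs[idx] = 0.0             # remove the chosen token's mass...
        probs = probs / probs.sum()  # ...and renormalize before the next draw
    return picks

# Even a sharply peaked (low-temperature) distribution yields k distinct picks:
p = torch.tensor([0.90, 0.05, 0.03, 0.01, 0.01])
print(draft_children(p, 3))
```

The explicit loop just makes the renormalization visible; `torch.multinomial(probs, k, replacement=False)` performs the same sequential draws in one call.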

Below we show two examples of tree structures in Sequoia. The left tree has 64 nodes, a size suited to on-chip inference; the right tree has 768 nodes, suited to offloading settings. Our tree construction algorithm allocates more descendants to nodes in earlier layers that have a higher probability of acceptance.
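A toy version of the hardware-aware optimizer's core trade-off appears below: larger trees yield more accepted tokens per verification step but cost more per forward pass (far more so with offloading), so the best budget depends on measured hardware costs. The `verify_cost` table, the per-node draft cost, and the stand-in acceptance curve are all hypothetical placeholders for values you would profile on your own hardware and compute with a tree-search DP like the sketch above.

```python
import math

# Hypothetical profiled cost (seconds) of one target-model forward pass that
# verifies n tree nodes at once; real values come from benchmarking, and are
# much larger per pass in the offloading setting than on-chip.
verify_cost = {16: 0.020, 32: 0.021, 64: 0.023, 128: 0.027, 256: 0.036, 768: 0.080}
draft_cost_per_node = 0.0001  # hypothetical drafting cost per tree node

def expected_accepted(n: int) -> float:
    """Stand-in acceptance curve with slow (roughly logarithmic) growth;
    in practice this comes from a dynamic program like best_tree(n) above."""
    return 1.0 + 0.9 * math.log2(n)

def tokens_per_second(n: int) -> float:
    step_time = verify_cost[n] + draft_cost_per_node * n
    return expected_accepted(n) / step_time

best = max(verify_cost, key=tokens_per_second)
print(f"best tree size: {best} nodes, {tokens_per_second(best):.0f} tokens/s")
```

In the offloading setting, where each forward pass is dominated by transfer time and verification cost grows slowly with tree size, the same search naturally favors much larger trees, consistent with the 768-node example above.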

Conclusion and Future Work

Leveraging Sequoia, anyone can use an RTX 4090 or another low-cost consumer GPU to host very strong LLMs, such as 70B models, without approximation, expanding the range of applications for AI-generated content. Sequoia also provides large speedups on high-end GPUs in the small-batch setting, improving the performance of latency-sensitive applications like chatbots.

We believe Sequoia will perform particularly well on future hardware, because its performance scales with the compute-to-bandwidth ratio of the hardware, which has been increasing over time (e.g., from V100 to A100 to H100). By mitigating the bandwidth gaps across the memory hierarchy (SRAM, HBM, RAM, SSD, ...) with smart algorithms, Sequoia opens new opportunities for AI accelerator design. We are excited to design even faster algorithms for future hardware!

BibTeX

@article{chen2024sequoia,
  title={Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding},
  author={Chen, Zhuoming and May, Avner and Svirschevski, Ruslan and Huang, Yuhsun and Ryabinin, Max and Jia, Zhihao and Chen, Beidi},
  journal={arXiv preprint arXiv:2402.12374},
  year={2024}
}
