Inference

Announcing Serverless Multi-LoRA: Fine-tune and deploy hundreds of adapters for model customization at scale

December 18, 2024

By Together AI

Today we're launching comprehensive LoRA (Low-Rank Adaptation) support on Together Serverless, enabling you to fine-tune and deploy hundreds of custom LoRA adapters while paying only the base model's per-token prices to run them. Fine-tuning is a powerful tool for improving model performance on specific tasks such as style, formatting, and translation, but managing multiple fine-tuned models traditionally comes with significant complexity and cost. Our platform solves this by letting you serve hundreds of custom LoRA adapters alongside a single base model, dramatically reducing costs while delivering high-performance customized models without the headaches of infrastructure management.

Today's launch includes:

  • Serverless LoRA inference with pay-per-token pricing. Upload your own LoRA adapters (for example, from Hugging Face) and run inference on them with any of our compatible serverless models, including popular models like Llama 3.1 and Qwen 2.5 (see the example after this list).
  • Multi-LoRA support on our serverless platform, enabling dynamic adapter switching at scale. Run hundreds of models for the same price as the base model. 
  • LoRA fine-tuning API for fine-tuning custom model adapters. Seamlessly test and deploy your fine-tuned LoRAs through our playground or APIs. We support LoRA fine-tuning for several base models, with the flexibility to download your adapters.
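As a rough sketch of what serverless LoRA inference looks like with the Together Python SDK, the snippet below sends a chat completion request to a fine-tuned adapter. The adapter model name is a placeholder for whatever your uploaded or fine-tuned adapter is called in your account; check the docs for the exact name format.

```python
# Hedged sketch: requires `pip install together` and TOGETHER_API_KEY set in the environment.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

response = client.chat.completions.create(
    # Placeholder: replace with your own LoRA adapter's model name,
    # e.g. the name returned when your fine-tuning job or adapter upload completes.
    model="your-account/Meta-Llama-3.1-8B-Instruct-your-adapter",
    messages=[{"role": "user", "content": "Summarize this ticket: printer on floor 3 is offline."}],
)
print(response.choices[0].message.content)
```

Because the adapter runs against a shared serverless base model, the request is billed at the base model's per-token rate.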

We're working with leading companies like Salesforce, Zomato, and The Washington Post to bring their fine-tuned models from experimentation to production, while partnering with fine-tuning platforms like OpenPipe to power inference for their customers.

“We use LoRAs to help our customers to train and deploy heavily customized models faster. Together AI's serverless multi-LoRA inference scales well while maintaining high throughput and low latency. We’re excited to partner with them to enable our customers to bring fine-tuned models directly into production seamlessly.”

- Kyle Corbitt, Founder of OpenPipe


LoRA: A powerful method for efficient fine-tuning

LoRA (Low-Rank Adaptation) is an efficient approach to fine-tuning models. Rather than modifying the entire model's weights, LoRA creates lightweight "adapters" that require less memory for training and can be dynamically loaded at runtime, while keeping the base model unchanged. This approach significantly reduces infrastructure costs and complexity, as you can use a single base model and swap smaller task-specific adapters as needed. For example, you could create separate adapters for different tasks like language translation and text summarization, then dynamically switch between them at runtime in your application, depending on the request. This flexibility allows you to serve multiple use cases without needing to deploy separate models for each application, while still achieving strong task-specific performance.
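To make the "lightweight adapter" idea concrete, here is a minimal NumPy sketch of the standard LoRA formulation: instead of storing a full d×d weight update, an adapter stores two low-rank matrices A and B whose scaled product is added to the frozen base weight at runtime. The dimensions and values are illustrative, not Together's implementation.

```python
import numpy as np

d, r, alpha = 4096, 16, 32           # hidden size, LoRA rank, LoRA alpha (illustrative values)
W = np.random.randn(d, d)            # frozen base weight, never modified
A = np.random.randn(r, d) * 0.01     # trainable low-rank factor (r x d)
B = np.zeros((d, r))                 # trainable low-rank factor (d x r), zero-initialized

# Effective weight at runtime: base weight plus scaled low-rank update.
W_eff = W + (alpha / r) * (B @ A)

# The adapter stores only A and B: 2*d*r parameters instead of d*d.
print(f"adapter params: {2 * d * r:,} vs full matrix: {d * d:,}")
```

Because r is tiny relative to d, an adapter is a small fraction of the base model's size, which is what makes swapping many adapters over one base model practical.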

The power of multi-LoRA: Run custom AI models at scale

Multi-LoRA unlocks the ability to serve multiple AI adapters with a single base model and swap between them at runtime. Before, if you had 100 different fine-tuned models, you would need to host and deploy each model on its own infrastructure. With multi-LoRA you can serve hundreds of LoRA adapters on the same infrastructure as the base model, leading to significant cost savings and rapid experimentation.

Multi-LoRA enables diverse use cases across industries: Marketing agencies can create adapters for each client's voice and style, while enterprise teams can deploy specialized adapters for various tasks—from customer service automation to fraud detection—all using a shared base model. For example, an IT department might use different adapters for ticket classification, bug summarization, and documentation chatbots. Multi-LoRA's flexibility also makes it valuable for A/B testing different fine-tuning approaches and managing versions of individual adapters.
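A hedged sketch of what per-request adapter switching could look like in application code: each request is routed to a different adapter simply by passing a different model name, while Together serves them all against one shared base model. The adapter names and the routing table below are hypothetical.

```python
from together import Together

client = Together()

# Hypothetical routing table mapping tasks to LoRA adapter names in your account.
ADAPTERS = {
    "ticket_classification": "your-account/llama-3.1-8b-ticket-classifier",
    "bug_summarization":     "your-account/llama-3.1-8b-bug-summarizer",
    "docs_chatbot":          "your-account/llama-3.1-8b-docs-chat",
}

def run_task(task: str, prompt: str) -> str:
    # Switching adapters is just a different `model` string per request;
    # the base model and infrastructure stay the same on Together's side.
    response = client.chat.completions.create(
        model=ADAPTERS[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(run_task("bug_summarization", "Stack trace: NullPointerException in PaymentService"))
```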

Deploying this multi-LoRA architecture on platforms like Amazon SageMaker requires complex memory and batching configurations to manage GPU resources and adapter swapping. Together Serverless eliminates this complexity by automatically handling the serving and scaling of hundreds of LoRAs while maintaining high performance and efficiency, at the same cost as the base model.

Why run LoRAs on Together AI

Cost-efficient model customization

Running multiple fine-tuned models traditionally requires separate instances and infrastructure for each model. With LoRAs on Together AI, you can serve hundreds of custom adapters at the same cost as running the base model alone. In addition, with our serverless infrastructure you pay only per token used for your fine-tuned models, eliminating spend on idle infrastructure.

Faster iteration and experimentation

Developing and testing multiple fine-tuned models typically involves significant waiting time as GPUs spin up and models load. With our serverless infrastructure, you can instantly test new adapters without waiting. This enables rapid iteration cycles whether you're uploading existing LoRA adapters from Hugging Face or testing your own fine-tuned versions.

Optimized performance at scale

Running LoRA adapters dynamically at runtime typically introduces some performance overhead, forcing organizations to choose between speed, cost, and flexibility when running fine-tuned models. At Together AI, our optimized serving system eliminates this trade-off, maintaining up to 90% of base model performance while providing flexible per-token pricing. These results are driven by the Together Kernel Collection (TKC), featuring innovations like Together FlashAttention 3, along with advanced techniques such as Cross-LoRA Continuous Batching, which parallelizes heterogeneous requests to maximize GPU utilization, and Adapter Prefetching, which scales seamlessly without overloading GPU memory. Our serverless infrastructure is specifically tuned for efficient adapter serving, while our support for FP8 Turbo models ensures faster, more memory-efficient inference. Speculative decoding further accelerates generation, enabling us to deliver a scalable, high-performance LoRA serving solution despite the inherent challenges of runtime adapter computation.
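As a purely conceptual illustration of the cross-LoRA batching idea (not Together's actual kernels), the sketch below batches requests that target different adapters into one shared base-weight multiply, then applies each request's own low-rank update via a gather over per-adapter factors. Names, shapes, and values are illustrative.

```python
import numpy as np

d, r, n_adapters, batch = 1024, 8, 3, 5

W = np.random.randn(d, d)                       # shared base weight
A = np.random.randn(n_adapters, r, d) * 0.01    # per-adapter low-rank factors
B = np.random.randn(n_adapters, d, r) * 0.01

x = np.random.randn(batch, d)                   # one token vector per request
adapter_ids = np.array([0, 2, 1, 0, 2])         # each request targets its own adapter

# The base projection is shared across the whole heterogeneous batch...
base_out = x @ W.T

# ...and each request's LoRA delta is applied with its own gathered A/B pair.
A_sel, B_sel = A[adapter_ids], B[adapter_ids]   # (batch, r, d), (batch, d, r)
delta = np.einsum("bd,brd->br", x, A_sel)       # project into rank-r space per request
delta = np.einsum("br,bdr->bd", delta, B_sel)   # project back to hidden size per request
y = base_out + delta

print(y.shape)  # (5, 1024): one output per request, despite three different adapters
```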

Easily fine-tune your own LoRA adapters with the Together Fine-tuning API

Our Fine-tuning API supports LoRA fine-tuning for several base models in our catalog, including the Llama and Qwen model families. The process is straightforward: upload your dataset and start training your LoRA adapters. We provide flexible training configurations to match your specific use case (see the example after this list), such as:

  • Configurable LoRA rank: trade off between the fine-tuning capacity and the size of the final adapter
  • Layer-specific adapter application for targeted model improvements: apply LoRA to all linear layers, or just a selection of parameters (for example, query/key/value projections in attention)
  • Adjustable LoRA alpha parameter to control the fine-tuning strength
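A hedged sketch of launching a LoRA fine-tuning job with the Together Python SDK is shown below. The dataset path and base model name are placeholders, and the exact parameter names for rank, alpha, and target modules are assumptions to be confirmed against the fine-tuning docs.

```python
from together import Together

client = Together()

# Upload a JSONL training set (the file path is a placeholder).
train_file = client.files.upload(file="my_training_data.jsonl")

# Parameter names below (lora_r, lora_alpha, lora_trainable_modules) are assumptions;
# confirm the exact names and supported base models in the fine-tuning docs.
job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # placeholder base model name
    training_file=train_file.id,
    lora=True,                            # request a LoRA adapter rather than a full fine-tune
    lora_r=16,                            # LoRA rank: capacity vs. adapter-size trade-off
    lora_alpha=32,                        # scaling factor controlling fine-tuning strength
    lora_trainable_modules="all-linear",  # or target specific projections such as q/k/v
    n_epochs=3,
)
print(job.id)
```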

Once training is complete, you can either download your LoRA adapter, or immediately start using it on our serverless platform via the playground or APIs. Your fine-tuned model will be ready for inference at the same cost as the base model – you only pay per token used.

Learn more about LoRA fine-tuning in our docs.

Getting Started

To get started using LoRA on Together AI, see the options below.

Interested in Multi-LoRA?

Fine-tune and deploy hundreds of custom model adapters while paying only base model per-token prices to run them.

Interested in enterprise Multi-LoRA deployments?

Deploy custom AI models at scale and experiment faster.


