Fine-tuning API: Introducing long-context training, conversation data support and more configuration options

November 25, 2024

By Max Ryabinin, Artem Chumachenko, George Grigorev, Arsh Zahed, Gleb Vazhenin

As organizations race to gain a competitive advantage with generative AI, fine-tuning large language models has become critical for enhancing performance on specific tasks. Today, we are launching several new features for our Fine-tuning API that make it easier for ML teams to customize open models. We've been working with companies like Salesforce and Zomato to improve our fine-tuning capabilities and enable them to easily fine-tune models on their own data. Now, we are excited to bring these capabilities to all users of the Together platform.

Here are the new Fine-tuning API features and updates at a glance:

  • Longer-context fine-tuning: Train models on extended context windows for handling large documents and complex data inputs. We now support up to 32K context length for Llama 3.1 8B and 70B fine-tuning and inference.
  • Conversation and instruction data format support: Feed conversation or instruction data into the Fine-tuning API directly, without the need to manually format examples, and easily choose between training on complete examples or on model outputs only.
  • Training quality improvements: Get even more capable models with no changes in hyperparameters, inputs or cost of fine-tuning jobs.
  • Validation dataset support: Test models on unseen data during training to assess how they generalize.
  • Quality-of-life improvements: We offer new options to customize your training jobs, improve experiment tracking via Weights & Biases, and provide an automated batch size setting to easily start more efficient fine-tuning runs.

Below we will describe each of these new features in more detail and show you how to use them in your fine-tuning experiments on the Together platform.

Longer-context fine-tuning

Even the most capable language models of today can struggle with processing long-sequence data. Training on longer examples allows models to retain and interpret broader sections of content, making it invaluable for tasks like document review or long-form generation.

To support this use case, we now offer fine-tuning of Llama 3.1 8B and 70B models with up to 32K context length. To use instruction-tuned models with extended context, just specify meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference or meta-llama/Meta-Llama-3.1-70B-32k-Instruct-Reference as the model name when you create a fine-tuning job. See the full list of models supported for long-context training in our docs.
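
For example, a long-context job is created with the same CLI command as a regular one. Here is a minimal sketch, assuming the training data has already been uploaded and its file ID stored in $TRAINING_FILE_ID:

# Create a fine-tuning job for the 32K-context variant of Llama 3.1 8B
together fine-tuning create \
  --training-file $TRAINING_FILE_ID \
  --model "meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference"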

To learn more about potential applications of long-context fine-tuning, read our deep-dive blog post, where we showcase this feature on a synthetic repetition task as well as on long-document summarization. We show how a fine-tuned version of Llama 3.1 8B outperforms its 70B base counterpart by over 10% in terms of ROUGE score. This example demonstrates how fine-tuning can deliver both lower inference costs and better task performance.

Conversation and instruction data format support

Many developers are working on applications like chatbots and virtual assistants, which rely on high-quality, context-aware responses. Conversational and instruction-based data formats streamline data preparation by letting you feed conversation histories and instruction datasets directly into the Fine-tuning API using standard formats. This eliminates the need to manually reformat examples and makes it easy to switch between the different models available in the API: if you train an instruction-tuned model, the correct chat template is applied automatically. Lastly, the conversation format is directly compatible with our chat completions API for inference, as well as with the OpenAI fine-tuning API data format, so you can easily upload your existing data to Together and start training open models.

To submit a fine-tuning job with a conversation data format, you simply need to create and upload a JSON Lines (JSONL) file in which each line contains a JSON object with a list of messages, where each message consists of a role and its content. Here is an example of one line from such a dataset:


{
  "messages": [
    {"role": "system", "content": "This is a system prompt."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
    {"role": "user", "content": "Can you explain machine learning?"},
    {"role": "assistant", "content": "Machine learning is..."}
  ]
}
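
To make the dataset available for training, upload it with the Together CLI; a minimal sketch (the file name is illustrative):

# Upload the JSONL dataset; the returned file ID is passed to --training-file
together files upload conversations.jsonl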

For an example of an instruction dataset, read the section about supported data formats in our docs. For both dataset formats, you can choose whether to train the model on complete examples or only on the assistant messages (or completions, in the case of instruction data) with the --train-on-inputs option. By default, we train only on the model outputs, but enabling training on full examples with --train-on-inputs true can lead to better results on your specific dataset.
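
For reference, here is a sketch of a single line from an instruction-format dataset. The prompt/completion layout follows the supported data formats described in our docs; consult them for the exact schema:

{
  "prompt": "Summarize the following support ticket: ...",
  "completion": "The customer reports that ..."
}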

Check out our deep-dive blog post about conversation data fine-tuning, which provides a complete example of how to use this new feature to improve the ability of Llama 3.1 to answer questions about dialogues. With this convenient way to submit conversation datasets, the exact match score on the task improves from 0.043 to 0.62! While similar gains were possible before with manual formatting, it is now much easier: you can directly submit structured data and effortlessly switch between models with different chat templates.
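
Since the conversation format matches our chat completions API, the same messages structure carries over to inference. Below is a minimal sketch of querying a fine-tuned model through the OpenAI-compatible endpoint; the model name is a hypothetical placeholder for your own fine-tuned model ID:

curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-account/Meta-Llama-3.1-8B-Instruct-ft-example",
    "messages": [
      {"role": "user", "content": "Can you explain machine learning?"}
    ]
  }'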

Training quality improvements

We have made a range of improvements to the training procedure that raise the quality of the models you get at the end of fine-tuning. Fine-tuning jobs created through our service with the same parameters as before will now yield even better models, at no additional cost to you.

To demonstrate the impact of these changes, we ran several experiments on our API using a range of sufficiently complex benchmarks in two categories: mathematics (MATH-Hard and GSM-Plus) and knowledge (the MMLU-Pro Humanities subset). For training, we used 100K samples from OpenMathInstruct-2 for the mathematics tasks and the auxiliary train set of MMLU for the knowledge tasks, respectively.

We ran our experiments with the instruction-tuned version of Llama 3.1 8B, as this is one of the most popular models on our platform. For both task categories, we trained for one epoch and report the average result across three runs.

Benchmark           | Original Llama 3.1 8B Instruct | Fine-tuned via Together (before update) | Fine-tuned via Together (after update)
MATH-Hard           | 11.6                           | 7.4                                     | 14.6
GSM-Plus            | 53.8                           | 47.4                                    | 54.1
MMLU-Pro Humanities | 37.6                           | 33.9                                    | 38.4

As the table above shows, there is a noticeable performance boost compared to the prior fine-tuning results, with relative gains ranging from just over 10% on GSM-Plus and MMLU-Pro Humanities to almost 2x on MATH-Hard. Importantly, the model improves even against a strong baseline: the original Llama 3.1 already performs very well on these benchmarks, which can be attributed to its diverse post-training dataset that may contain tasks similar to ours. When you fine-tune on your own datasets with Together, you should expect small yet consistent improvements in model quality after the update.

Validation dataset support

With a validation dataset, you can now monitor the loss of the model on unseen data during training to make sure it can generalize to new examples. This can guide the development process, helping you choose the optimal hyperparameters and the overall training setup before proceeding to deployment.

To run a job with periodic evaluations, upload a validation dataset in the regular way, and then submit a job with the following new arguments:

  • --validation-file to specify the ID of the uploaded file to use for validation.
  • --n-evals to specify the total number of evaluations that will run during training.

An example command to start fine-tuning with a validation dataset is shown below:


together fine-tuning create \
  --training-file $TRAINING_FILE_NAME \
  --validation-file $VALIDATION_FILE_NAME \
  --n-evals 10 \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference"

Learn more about working with validation datasets in our documentation.

Quality-of-life improvements

In addition to the above, we have added several smaller improvements to the service that make it easier to manage your work on the platform and achieve better results with fine-grained hyperparameter choices.

  • Enhanced Weights & Biases integration: you can now specify the project name or the run name for your experiment, as well as change the W&B base URL if you are running a custom instance of Weights & Biases.
  • Automated batch size setting: to get the highest training efficiency, you can now create fine-tuning jobs that will use the largest possible batch size for any model you choose. To enable this, just set --batch-size max when you use the Together Python client to submit a fine-tuning request, or set the batch size to max . This way, you will not need to manually check the limits for every model, and the training will run as fast as possible.
  • More options for the learning rate schedule: for a better control over how the learning rate is adjusted over time, we added the --warmup-ratio parameter to control the percentage of training steps used for warmup, as well as --min-lr-ratio , which defines the final learning rate relative to the peak value.
  • Configurable weight decay and gradient clipping: it is now possible to add weight decay to control the regularization strength, and to control the gradient clipping behavior by increasing the maximum gradient norm or disabling clipping completely.
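
As an illustration, several of these options can be combined in a single job. The sketch below uses only the flags named above, with illustrative values; weight decay, gradient clipping, and the Weights & Biases options have analogous parameters listed in our API reference:

# Start a job with an automatic batch size and a custom LR schedule:
# warm up for 5% of the steps, then decay to 10% of the peak learning rate
together fine-tuning create \
  --training-file $TRAINING_FILE_ID \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference" \
  --batch-size max \
  --warmup-ratio 0.05 \
  --min-lr-ratio 0.1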

All of those parameters are documented in our API reference and are ready to use today.

Why choose the Together Fine-Tuning API?

  • Improve model quality and decrease costs: Our platform allows you to specialize the best open models on your tasks, bringing smaller and more efficient LLMs to the level of performance usually achieved by much larger models.
  • Full ownership and flexibility: Unlike some LLM customization services, the Together Fine-tuning API lets you retain complete control over your models after training, with the option to download final and intermediate checkpoints and run them locally.
  • High configurability: We offer a broad choice of fine-tuning models and training parameters that you can use in your experiments, including a variety of supported data formats and training hyperparameters.
  • Iterate and experiment faster: the Together Fine-tuning API supports rapid testing and optimization, enabling fast-paced iteration cycles.

Get started with fine-tuning on Together AI

We are excited to see what you will build using our new fine-tuning API features. Check out the docs to learn more about them and get started with the API. Join us on December 12 for a webinar about fine-tuning, chat with the community on Discord, or contact us directly if you’re interested in applying fine-tuning for your use cases.

Get started with Together Fine-tuning

Start customizing models with your own data and see improved task accuracy.

