
Inference that’s fast, simple, and scales as you grow.

import os
import requests

url = ''  # set to the Together Inference API endpoint
headers = {
    'Authorization': 'Bearer ' + os.environ["TOGETHER_API_KEY"],
    'accept': 'application/json',
    'content-type': 'application/json'
}

data = {
    "model": "togethercomputer/llama-2-70b-chat",
    "prompt": "The capital of France is",
    "max_tokens": 128,
    "stop": ".",
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "repetition_penalty": 1
}

response = requests.post(url, json=data, headers=headers)

#  Signup to get your API key here:
#  Documentation for API usage:
import os
import requests

url = ""  # set to the Together API endpoint

headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"
}

response = requests.get(url, headers=headers)


#  Signup to get your API key here:
#  Documentation for API usage:
import os
import requests

url = ''  # set to the Together Inference API endpoint
headers = {
    'Authorization': 'Bearer ' + os.environ["TOGETHER_API_KEY"],
    'accept': 'application/json',
    'content-type': 'application/json'
}

data = {
    "model": "togethercomputer/llama-2-70b-chat",
    "prompt": "The capital of France is",
    "stream_tokens": True
}

response = requests.post(url, json=data, headers=headers, stream=True)
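With `"stream_tokens"` enabled, the completion arrives incrementally as server-sent events rather than a single JSON body. A rough sketch of parsing such a stream, assuming `data: {...}` lines with the generated text under `choices[0].text` and a terminating `data: [DONE]` sentinel (the exact payload shape is an assumption, not taken from this page):

```python
import json

def parse_sse_tokens(raw_stream):
    """Collect text chunks from a server-sent-event stream.

    Assumes each event is a line of the form `data: {...json...}` and the
    stream ends with `data: [DONE]` -- an assumed format, for illustration.
    """
    tokens = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        # assume the generated text lives under choices[0].text
        tokens.append(event["choices"][0]["text"])
    return tokens

# Canned stream for illustration; a real call would iterate
# requests.post(..., stream=True).iter_lines() instead.
raw = (
    'data: {"choices": [{"text": " Paris"}]}\n'
    'data: {"choices": [{"text": "."}]}\n'
    'data: [DONE]\n'
)
print(parse_sse_tokens(raw))  # [' Paris', '.']
```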

Perfect for enterprises — performance, privacy, and scalability to meet your needs.


You get faster tokens per second, higher throughput, and a lower time to first token. All of these efficiencies mean we can provide you compute at a lower cost.

  • Faster speed relative to TGI, vLLM, and other inference services (Llama-2 70B)

  • 6x lower cost relative to gpt-3.5-turbo

The Together Inference Engine sets us apart.

We built the blazing fast inference engine that we wanted to use. Now, we’re sharing it with you.

The Together Inference Engine deploys the latest inference techniques:

  • 01

    Flash-Decoding dramatically speeds up attention in the Together Inference Engine, while FlashAttention reduces the time to first token, delivering up to 8x faster generation for very long sequences. We achieve this by loading keys and values in parallel chunks, then separately rescaling and combining the partial results to preserve the exact attention output.
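The rescale-and-combine idea can be sketched in NumPy for a single query: split the KV cache into chunks, keep per-chunk softmax statistics, then merge them so the result matches exact attention. This is a CPU toy of the math only, not the GPU kernel:

```python
import numpy as np

def attention(q, K, V):
    # reference: softmax(q . K^T) . V for a single query vector
    s = K @ q
    w = np.exp(s - s.max())
    return (w / w.sum()) @ V

def flash_decode(q, K, V, n_chunks=4):
    """Toy Flash-Decoding: process the KV cache in chunks (in a real
    kernel these run in parallel), keeping per-chunk (max, sum,
    weighted-V) stats, then rescale and combine so the output equals
    exact softmax attention."""
    stats = []
    for Kc, Vc in zip(np.array_split(K, n_chunks),
                      np.array_split(V, n_chunks)):
        s = Kc @ q                      # chunk scores
        m = s.max()                     # chunk max, for stability
        w = np.exp(s - m)
        stats.append((m, w.sum(), w @ Vc))
    m_all = max(m for m, _, _ in stats)
    denom = sum(z * np.exp(m - m_all) for m, z, _ in stats)
    numer = sum(o * np.exp(m - m_all) for m, _, o in stats)
    return numer / denom

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(64, 8)), rng.normal(size=(64, 4))
assert np.allclose(flash_decode(q, K, V), attention(q, K, V))
```

The per-chunk max and sum are exactly the quantities that let partial softmaxes be merged without recomputing anything.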

  • 02

    Using CUDA graphs greatly reduces the overhead of launching GPU operations in the Together Inference Engine: a sequence of GPU operations is captured once and then replayed through a single CPU-side launch, instead of paying the launch cost for every operation on every step.
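The mechanism can be illustrated with a CPU-side toy; the real API is CUDA graph capture/replay (e.g. `torch.cuda.CUDAGraph` in PyTorch), which this sketch only models abstractly by counting dispatch calls:

```python
class FakeGraph:
    """Toy analogy of CUDA graph capture/replay: record a sequence of
    'kernels' once, then replay them all through a single dispatch call.
    A CPU-side illustration of the overhead accounting, not the real API."""
    def __init__(self):
        self.ops = []
        self.dispatches = 0   # stand-in for CPU-side launch overhead

    def capture(self, *ops):
        self.ops = list(ops)

    def replay(self, x):
        self.dispatches += 1  # one CPU call covers the whole graph
        for op in self.ops:
            x = op(x)
        return x

def eager_run(ops, x, launch_counter):
    for op in ops:
        launch_counter[0] += 1  # one launch per kernel, every step
        x = op(x)
    return x

ops = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
g = FakeGraph()
g.capture(*ops)
launches = [0]
for _ in range(100):
    eager = eager_run(ops, 5, launches)
    graphed = g.replay(5)
assert eager == graphed == 9          # same result either way
assert launches[0] == 300             # eager: 3 launches x 100 steps
assert g.dispatches == 100            # graphed: 1 dispatch per step
```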

  • 03

    Developed by our expert research team, the Together Inference Engine layers on multiple techniques. As we do this, we make painstaking optimizations to ensure that we get unmatched efficiency.


  • Customize leading open-source models with your own private data.

  • Achieve higher accuracy on your domain tasks.


  • Start by preparing your dataset: one training example per line in a .jsonl file, following the prompt template of the model you are fine-tuning.

  • {"text": "<s>[INST] <<SYS>>\n{your_system_message}\n<</SYS>>\n\n{user_message_1} [/INST]"}
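One hypothetical way to produce such a file programmatically. Only the `[INST]`/`<<SYS>>` template skeleton comes from the line above; the helper name, the file name, and the choice to append the assistant's answer after `[/INST]` are illustrative assumptions:

```python
import json

# Llama-2 chat template; appending the answer after [/INST] is an
# assumption about the training format, not taken from the page.
LLAMA2_TEMPLATE = ("<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
                   "{user} [/INST] {assistant}</s>")

def write_finetune_file(path, system, pairs):
    """Write one JSON object per line, each with a single "text" field
    holding a fully formatted chat example."""
    with open(path, "w") as f:
        for user, assistant in pairs:
            text = LLAMA2_TEMPLATE.format(system=system, user=user,
                                          assistant=assistant)
            f.write(json.dumps({"text": text}) + "\n")

pairs = [
    ("How do I reset my password?", "Go to Settings > Account > Reset password."),
    ("Where is my invoice?", "Invoices are under Billing > History."),
]
write_finetune_file("train.jsonl", "You are a helpful support agent.", pairs)

# quick validation pass: every line must parse and contain "text"
with open("train.jsonl") as f:
    rows = [json.loads(line) for line in f]
assert all("text" in row for row in rows)
```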
  • Validate that your dataset has the right format and upload it.

  • together files check $FILE_NAME
    together files upload $FILE_NAME
    {
        "filename": "acme_corp_customer_support.json",
        "id": "file-aab9997e-bca8-4b7e-a720-e820e682a10a",
        "object": "file"
    }
  • Begin fine-tuning with a single command, with full control over hyperparameters.

  • together finetune create --training-file $FILE_ID \
        --model $MODEL_NAME \
        --wandb-api-key $WANDB_API_KEY \
        --suffix v1 \
        --n-epochs 10 \
        --n-checkpoints 5 \
        --batch-size 8 \
        --learning-rate 0.0003
    {
        "training_file": "file-aab9997-bca8-4b7e-a720-e820e682a10a",
        "model_output_name": "username/togethercomputer/llama-2-13b-chat",
        "model_output_path": "s3://together/finetune/63e2b89da6382c4d75d5ef22/username/togethercomputer/llama-2-13b-chat",
        "Suffix": "Llama-2-13b 1",
        "model": "togethercomputer/llama-2-13b-chat",
        "n_epochs": 4,
        "batch_size": 128,
        "learning_rate": 1e-06,
        "checkpoint_steps": 2,
        "created_at": 1687982945,
        "updated_at": 1687982945,
        "status": "pending",
        "id": "ft-5bf8990b-841d-4d63-a8a3-5248d73e045f",
        "epochs_completed": 3,
        "events": [
            {
                "object": "fine-tune-event",
                "created_at": 1687982945,
                "message": "Fine tune request created",
                "type": "JOB_PENDING"
            }
        ],
        "queue_depth": 0,
        "wandb_project_name": "Llama-2-13b Fine-tuned 1"
    }
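The page does not document how flags like `--n-checkpoints` relate to the `checkpoint_steps` field in the response. As a back-of-envelope illustration only (not the service's actual scheduling logic), one could space the requested number of checkpoints evenly over the total step count:

```python
import math

def checkpoint_schedule(n_examples, batch_size, n_epochs, n_checkpoints):
    """Illustrative arithmetic only: evenly space n_checkpoints over the
    total number of optimizer steps implied by the dataset and flags."""
    steps_per_epoch = math.ceil(n_examples / batch_size)
    total_steps = steps_per_epoch * n_epochs
    interval = max(1, total_steps // n_checkpoints)
    return total_steps, interval

# hypothetical dataset size, flags matching the command above
total, every = checkpoint_schedule(n_examples=1000, batch_size=8,
                                   n_epochs=10, n_checkpoints=5)
print(total, every)  # 1250 250
```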
  • Monitor results on Weights & Biases, or deploy checkpoints and test them through the Together Playgrounds.

Together fine-tuning

Fine-tune models with your data.

Host your fine-tuned model for inference when it’s ready.

  • Together Custom Models is designed to help you train your own state-of-the-art AI model.

  • Benefit from cutting-edge optimizations in the Together Training stack like FlashAttention-2.

  • Once done, the model is yours. You retain full ownership of the model you create, and you can run it wherever you please.

  • Together Custom Models helps you through all stages of building your state-of-the-art AI model:

  • 01. Start with data design.

  • Incorporate quality signals from RedPajama-v2 (30T tokens) into your model to boost its quality.

  • Choose data based on similarity to Wikipedia, amount of code, or how often the text uses bullets for brevity. For more details on the quality slices in RedPajama-v2, read the blog post.

  • Leverage advanced data selection tools like DSIR to select data slices and then optimize the amount of each slice used with DoReMi.
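As a heavily simplified caricature of that selection step (real DSIR uses hashed n-gram features and importance resampling; the unigram scoring and all corpora here are toy illustrations), one can rank raw documents by how much more likely they are under a target-domain language model than under the raw-pool model:

```python
from collections import Counter
import math

def unigram_logprob(doc, counts, total, vocab):
    # add-one smoothed unigram log-probability of a document
    return sum(math.log((counts[w] + 1) / (total + vocab))
               for w in doc.split())

def select_like_target(raw_docs, target_docs, k):
    """Caricature of DSIR-style selection: keep the k raw documents
    whose unigram statistics look most like the target domain."""
    tgt = Counter(w for d in target_docs for w in d.split())
    raw = Counter(w for d in raw_docs for w in d.split())
    vocab = len(tgt | raw)
    t_tot, r_tot = sum(tgt.values()), sum(raw.values())

    def score(doc):  # log p_target(doc) - log p_raw(doc)
        return (unigram_logprob(doc, tgt, t_tot, vocab)
                - unigram_logprob(doc, raw, r_tot, vocab))

    return sorted(raw_docs, key=score, reverse=True)[:k]

# toy corpora: the code-like raw document should win
target = ["def parse tokens stream", "class model forward pass"]
raw = ["buy cheap watches now",
       "def forward pass of model",
       "great deals today only"]
print(select_like_target(raw, target, k=1))
```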

  • 02. Select model architecture & training recipe.

  • We provide proven training recipes for instruction-tuning, long context optimization, conversational chat, and more.

  • Work in collaboration with our team of experts to determine the optimal architecture and training recipe.

  • 03. Train your model.

  • Press go. Together Custom Models schedules, orchestrates, and optimizes your training jobs over any number of GPUs.

  • Up to

    9x faster training

    with FlashAttention-2

  • Up to

    75% lower cost

    than training on AWS

  • 04. Tune and align your model.

  • Further customize and tailor your model to follow instructions and your business rules.

  • 05. Evaluate model quality.

  • Evaluate your final model on public benchmarks such as HELM and LM Evaluation Harness, and your own custom benchmark — so you can iterate quickly on model quality.
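A minimal sketch of the custom-benchmark half of that evaluation: exact-match accuracy over (prompt, reference) pairs, with `model_fn` standing in for whatever inference call you actually use (the canned lookup below is purely illustrative):

```python
def exact_match_accuracy(model_fn, benchmark):
    """Score a model on a custom benchmark: the fraction of prompts
    whose (normalized) completion exactly matches the reference."""
    hits = 0
    for prompt, reference in benchmark:
        prediction = model_fn(prompt).strip().lower()
        hits += prediction == reference.strip().lower()
    return hits / len(benchmark)

# toy "model" standing in for a real inference call
canned = {"The capital of France is": "Paris", "2 + 2 =": "5"}
model_fn = lambda p: canned[p]

benchmark = [("The capital of France is", "Paris"), ("2 + 2 =", "4")]
print(exact_match_accuracy(model_fn, benchmark))  # 0.5
```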

Together custom models

Build models from scratch

We love to build state-of-the-art models. Use Together Custom Models to train your next generative AI model.

Together GPU Clusters

We offer high-end compute clusters for training and fine-tuning. But premium hardware is just the beginning. Our clusters are ready-to-go with the blazing fast Together Training stack. And our world-class team of AI experts is standing by to help you. Together GPU Clusters has a >95% renewal rate. Come build with us, and see what the hubbub is about.

Cutting-edge hardware

  • The fastest network for distributed training — 3.2 Tbps Infiniband.

  • State-of-the-art training clusters with the fastest compute available — Nvidia H100 and A100 GPUs.


Training speed comparison graph

Software stack ready for distributed training 

  • Train with the Together Training stack, delivering nine times faster training speed with FlashAttention-2.

  • Slurm configured out-of-the-box for distributed training and the option to use your own scheduler. 

  • Directly SSH into the cluster, download your dataset and you’re ready to go.
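To make the Slurm point concrete, a hypothetical sbatch script for a two-node distributed job (the node count, rendezvous port, and `train.py` entry point are placeholders, not taken from the page):

```shell
#!/bin/bash
#SBATCH --job-name=llama-finetune
#SBATCH --nodes=2                 # two 8-GPU nodes
#SBATCH --ntasks-per-node=1       # one torchrun launcher per node
#SBATCH --gres=gpu:8
#SBATCH --output=%x-%j.log

# rendezvous on the first node in the allocation
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun torchrun \
  --nnodes "$SLURM_NNODES" \
  --nproc_per_node 8 \
  --rdzv_backend c10d \
  --rdzv_endpoint "$head_node:29500" \
  train.py --data /data/train.jsonl
```

Submit with `sbatch train.sbatch`; torchrun handles per-GPU process spawning on each node.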


Performance metrics

  • Training cost: 4x lower relative to AWS

  • Training speed: 9x faster


Hardware specs

  • 01

    A100 PCIe Cluster Node Specs

    - 8x A100 / 80GB / PCIe
    - 200Gb non-blocking Ethernet
    - 120 vCPU Intel Xeon (Ice Lake)
    - 960GB RAM
    - 7.68 TB NVMe storage

  • 02

    A100 SXM Cluster Node Specs

    - 8x NVIDIA A100 80GB SXM4
    - 200 Gbps Ethernet or 1.6 Tbps Infiniband configs available
    - 120 vCPU Intel Xeon (Sapphire Rapids)
    - 960 GB RAM
    - 8 x 960GB NVMe storage

  • 03

    H100 Clusters Node Specs

    - 8x Nvidia H100 / 80GB / SXM5
    - 3.2 Tbps Infiniband network
    - 2x AMD EPYC 9474F CPUs (48 cores / 96 threads, 3.6GHz each)
    - 1.5TB ECC DDR5 Memory
    - 8x 3.84TB NVMe SSDs

Customers Love Us

“Together GPU Clusters provided a combination of amazing training performance, expert support, and the ability to scale to meet our rapid growth to help us serve our growing community of AI creators.”

Demi Guo

CEO, Pika Labs

After pre-training a model using Together GPU Clusters, you can instruction-tune it with Together Fine-tuning and host it with Together Inference.

Contact us

After selecting a model with Together Inference, you can customize it with your own private data using Together Fine-tuning.

Try now

After building your model on Together GPU Clusters, you can deploy your own Dedicated Instances for your production traffic with Together Inference.

Contact us