The AI Acceleration Cloud

AI pioneers train, fine-tune, and run frontier models on our GPU cloud platform.

200+ generative AI models

Build with open-source and specialized multimodal models for chat, images, code, and more. Migrate from closed models with OpenAI-compatible APIs.
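The migration path relies on the endpoints speaking the OpenAI chat-completions request schema, so existing client code only needs a new base URL and model name. A minimal sketch, assuming an illustrative base URL and model id (both are examples, not official constants):

```python
# Sketch of migrating an OpenAI-style chat call to an OpenAI-compatible
# endpoint: the request body keeps the same schema, only the base URL
# and model name change. Values below are illustrative assumptions.

def build_chat_request(model: str, user_message: str,
                       base_url: str = "https://api.together.xyz/v1") -> dict:
    """Assemble the target URL and JSON body for a chat completion call.

    Because the body follows the OpenAI chat-completions schema, code
    written against a closed-model API keeps working once it is pointed
    at the new base URL.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

# Example: the same request shape works against either provider.
req = build_chat_request(
    model="meta-llama/Llama-3-8b-chat-hf",  # hypothetical model id
    user_message="Summarize this ticket.",
)
```

In practice you would POST `req["body"]` to `req["url"]` with your API key in the `Authorization` header, exactly as with the closed-model API.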


End-to-end platform for the full generative AI lifecycle

Leverage pre-trained models, fine-tune them for your needs, or build custom models from scratch. Whatever your generative AI needs, Together AI offers a seamless continuum of AI compute solutions to support your entire journey.

  • Inference

    The fastest way to build with pretrained AI models:

    • ✔ Serverless or dedicated endpoints

    • ✔ Deploy in enterprise VPC

    • ✔ SOC 2 and HIPAA compliant

  • Fine-Tuning

    Tailored customization for your tasks

    • ✔ Complete model ownership

    • ✔ Fully tune or adapt models

    • ✔ Easy-to-use APIs

    • Full Fine-Tuning
    • LoRA Fine-Tuning
  • GPU Clusters

    Full control for massive AI workloads

    • ✔ Accelerate large model training

    • ✔ GB200, B200, and H100 GPUs

    • ✔ Pricing from $1.75 / hour
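The two fine-tuning options above differ in what gets trained: full fine-tuning updates every weight, while LoRA freezes the base weights and learns a small low-rank update, which is why LoRA adapters are cheap to train and store. A toy NumPy illustration (dimensions and rank chosen arbitrarily):

```python
import numpy as np

# Toy illustration of LoRA (low-rank adaptation) vs. full fine-tuning.
# Sizes are arbitrary; real layers and ranks vary by model.
d, k, r = 512, 512, 8             # layer dimensions and LoRA rank (r << d, k)

W = np.random.randn(d, k)         # frozen pretrained weight
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable; zero init => W' == W at start

W_adapted = W + B @ A             # effective weight during/after training

# Full fine-tuning trains d*k parameters; LoRA trains only r*(d+k).
full_params = d * k               # 262,144
lora_params = r * (d + k)         # 8,192
```

Because `B` starts at zero, the adapted layer initially behaves exactly like the pretrained one; training then moves only the small `A` and `B` factors.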

Speed, cost, and accuracy. Pick all three.

SPEED RELATIVE TO vLLM: 4X FASTER

LLAMA-3 8B AT FULL PRECISION: 400 TOKENS/SEC

COST RELATIVE TO GPT-4o: 11X LOWER COST

Why Together Inference

Powered by the Together Inference Engine, combining research-driven innovation with deployment flexibility.

Control your IP.
Own your AI.

Fine-tune open-source models like Llama on your data and run them on Together Cloud or in a hyperscaler VPC. With no vendor lock-in, your AI remains fully under your control.

together files upload acme_corp_customer_support.jsonl

{
  "filename": "acme_corp_customer_support.jsonl",
  "id": "file-aab9997e-bca8-4b7e-a720-e820e682a10a",
  "object": "file"
}

together finetune create --training-file file-aab9997e-bca8-4b7e-a720-e820e682a10a \
  --model togethercomputer/RedPajama-INCITE-7B-Chat

together finetune create --training-file $FILE_ID \
  --model $MODEL_NAME \
  --wandb-api-key $WANDB_API_KEY \
  --n-epochs 10 \
  --n-checkpoints 5 \
  --batch-size 8 \
  --learning-rate 0.0003
{
    "training_file": "file-aab9997e-bca8-4b7e-a720-e820e682a10a",
    "model_output_name": "username/togethercomputer/llama-2-13b-chat",
    "model_output_path": "s3://together/finetune/63e2b89da6382c4d75d5ef22/username/togethercomputer/llama-2-13b-chat",
    "suffix": "Llama-2-13b 1",
    "model": "togethercomputer/llama-2-13b-chat",
    "n_epochs": 10,
    "batch_size": 8,
    "learning_rate": 0.0003,
    "checkpoint_steps": 2,
    "created_at": 1687982945,
    "updated_at": 1687982945,
    "status": "pending",
    "id": "ft-5bf8990b-841d-4d63-a8a3-5248d73e045f",
    "epochs_completed": 0,
    "events": [
        {
            "object": "fine-tune-event",
            "created_at": 1687982945,
            "message": "Fine tune request created",
            "type": "JOB_PENDING"
        }
    ],
    "queue_depth": 0,
    "wandb_project_name": "Llama-2-13b Fine-tuned 1"
}

Forge the AI frontier. Train on expert-built GPU clusters.

Built by AI researchers for AI innovators, Together GPU Clusters are powered by NVIDIA GB200, H200, and H100 GPUs, along with the Together Kernel Collection — delivering up to 24% faster training operations.

  • Top-Tier NVIDIA GPUs

    NVIDIA's latest GPUs, like GB200, H200, and H100, for peak AI performance, supporting both training and inference.

  • Accelerated Software Stack

    The Together Kernel Collection includes custom CUDA kernels, reducing training times and costs with superior throughput.

  • High-Speed Interconnects

InfiniBand and NVLink ensure fast communication between GPUs, eliminating bottlenecks and enabling rapid processing of large datasets.

  • Highly Scalable & Reliable

    Deploy 16 to 1000+ GPUs across global locations, with 99.9% uptime SLA.

  • Expert AI Advisory Services

Together AI’s expert team offers consulting for custom model development and scalable training best practices.

  • Robust Management Tools

    Slurm and Kubernetes orchestrate dynamic AI workloads, optimizing training and inference seamlessly.

Training-ready clusters – Blackwell and Hopper


Innovations

Our research team is behind breakthrough AI models, datasets, and optimizations.

Customer Stories

See how we support leading teams around the world. Our customers are creating innovative generative AI applications, faster.

How Hedra Scales Viral AI Video Generation with 60% Cost Savings

From AWS to Together Dedicated Endpoints: Arcee AI's journey to greater inference flexibility

When Standard Inference Frameworks Failed, Together AI Enabled 5x Performance Breakthrough

Build on the AI Native Cloud

Engineered for AI natives, powered by cutting-edge research

The Together AI Platform

Develop and scale AI-native apps

  • Reliable at production scale

Built for scale, with customers scaling to trillions of tokens in a matter of hours without any degradation in experience.

  • Industry-leading unit economics

    Continuously optimizing across inference and training to keep improving performance, delivering better total cost of ownership.

  • Frontier AI systems research

    Proven infra and research teams ensure the latest models, hardware, and techniques are made available on day 1.

Full stack development
for AI-native apps

Model Library

Evaluate and build with open-source and specialized models for chat, images, videos, code, and more.

Migrate from closed models with OpenAI-compatible APIs.

Start building now

Inference

Reliably deploy models with unmatched price-performance at scale. Benefit from inference-focused innovations like the ATLAS speculator system and Together Inference Engine.

Deploy on hardware of choice, such as NVIDIA GB200 NVL72 and GB300 NVL72.

Learn more

Fine-Tuning

Fine-tune open-source models with your data to create task-specific, fast, and cost-effective models that are 100% yours.

Easily deploy into production through Together AI's highly performant inference stack.

Learn more

Pre-Training

Securely and cost-effectively train your own models from the ground up, leveraging research breakthroughs such as the Together Kernel Collection (TKC) for fast, reliable training.

Contact us

GPU Clusters

Scale globally with our fleet of data centers (DCs).

These DCs feature frontier hardware such as NVIDIA GB200 NVL72 and GB300 NVL72.

Developers can go from self-serve instant clusters to custom AI factories for high-scale workloads.

Learn more

Industry leading AI research and open-source contributions

  • Flash Attention

  • Mixture of Agents

  • Dragonfly

  • Red Pajama Datasets

  • DeepCoder

  • Open Deep Research

  • Flash Decoding

  • Open Data Scientist Agent

Customer stories

AI-native companies partner with Together AI to build the next generation of apps


Proven results

Get to market faster and save costs with breakthrough innovations

  • Faster inference: 3.5x

  • Faster training: 2.3x

  • Lower cost: 20%

  • Network compression: 117x

Start running inference with the best price-performance at scale