
Introducing the Together AI Batch API: Process Thousands of LLM Requests at 50% Lower Cost

June 11, 2025

By Alay Dilipbhai Shah, Rajas Bansal, Mark Jones, Yogish Baliga, Ted Cui, Ameen Patel, Derek Dowling, Jordan Kail, Justin Foutts, Bryan Wade, Will Van Eaton, Anirudh Jain

We're excited to announce the launch of our Batch API, a solution for businesses and developers who need to process large volumes of LLM requests efficiently and cost-effectively. Whether you're running evaluations, classifying large datasets, generating marketing content, or processing data transformations, the Batch API delivers enterprise-grade performance at half the cost of real-time inference.

Why Batch Processing?

Not all AI workloads require immediate responses. Many use cases—from synthetic data generation to offline summarization—can wait hours for results. By processing these requests asynchronously during off-peak times, we can offer the same high-quality outputs at significantly reduced costs while maintaining the reliability you depend on. Most batches complete within hours, with a best-effort 24-hour processing window.

Key Benefits

50% Cost Savings

Process your non-urgent workloads with introductory pricing at half the cost of real-time API calls. Scale your AI inference without scaling your budget.

Large Scale Processing

Submit up to 50,000 requests in a single batch file (up to 100MB). Batch rate limits are entirely separate from your real-time usage.

Predictable Turnaround

Best-effort completion within 24 hours, with real-time progress tracking through multiple status stages, from validation to completion.

Simple Integration

Upload a JSONL file with your requests. Monitor progress through the Batch API and download results when complete.

Supported Models

Launched with support for 15 cutting-edge models:

Model ID                                            Size
deepseek-ai/DeepSeek-R1                             685B
deepseek-ai/DeepSeek-V3                             671B
meta-llama/Llama-3-70b-chat-hf                      70B
meta-llama/Llama-3.3-70B-Instruct-Turbo             70B
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8   17B
meta-llama/Llama-4-Scout-17B-16E-Instruct           17B
meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo       405B
meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo        70B
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo         8B
mistralai/Mistral-7B-Instruct-v0.1                  7B
mistralai/Mixtral-8x7B-Instruct-v0.1                8×7B
Qwen/Qwen2.5-72B-Instruct-Turbo                     72B
Qwen/Qwen2.5-7B-Instruct-Turbo                      7B
Qwen/Qwen3-235B-A22B-fp8-tput                       235B
Qwen/QwQ-32B                                        32B

How It Works

  1. Prepare Your Requests: Format your requests in a JSONL file, with each line containing a single request with a unique identifier
  2. Upload & Submit: Use our Files API to upload your batch and create the job
  3. Monitor Progress: Track your job through validation, queuing, processing, and aggregation stages
  4. Download Results: Retrieve your completed results in a structured format, with failed requests detailed in a separate error file
    
# Upgrade to the latest together python package: pip install --upgrade together

from together import Together

client = Together()

# 1. Upload your batch file
file_resp = client.files.upload(file="batch_input.jsonl", purpose="batch-api")

# 2. Create the batch job
batch = client.batches.create_batch(file_resp.id)
print(f"Batch created: {batch.id}")

# 3. Monitor progress
batch_status = client.batches.get_batch(batch.id)
print(f"Status: {batch_status.status}")

# 4. Retrieve results when complete
if batch_status.status == 'COMPLETED':
    # Download results using the output_file_id
    client.files.retrieve_content(id=batch_status.output_file_id, output="batch_output.jsonl")
    
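Once a batch completes, the output file is also JSONL. Results are not guaranteed to come back in submission order, so it helps to index them by custom_id before joining them to your inputs. The helper below is an illustrative sketch: beyond custom_id, the exact fields on each output record may differ, so check the Batch API documentation for the schema you actually receive.

```python
import json

def index_results(output_path):
    """Map each result line in a batch output file to its custom_id."""
    results = {}
    with open(output_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # custom_id is echoed back from the input file, so it is the
            # join key between your requests and these results.
            results[record["custom_id"]] = record
    return results
```

The same pattern works for the separate error file, letting you retry only the custom_ids that failed.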

Sample Input Format

    
{"custom_id": "req1", "body": {"model": "deepseek-ai/DeepSeek-V3", "messages": [{"role": "user", "content": "Explain quantum computing"}], "max_tokens": 200}}
{"custom_id": "req2", "body": {"model": "deepseek-ai/DeepSeek-V3", "messages": [{"role": "user", "content": "Tell me about San Francisco"}], "max_tokens": 200}}
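A file in this format can be generated with a few lines of Python. The helper name below is illustrative, not part of the SDK; each line is one JSON object with a unique custom_id and a chat-completion request body.

```python
import json

def write_batch_file(prompts, path, model="deepseek-ai/DeepSeek-V3", max_tokens=200):
    """Write one JSONL request line per prompt, each with a unique custom_id."""
    with open(path, "w") as f:
        for i, prompt in enumerate(prompts):
            request = {
                "custom_id": f"req{i + 1}",  # must be unique within the file
                "body": {
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                    "max_tokens": max_tokens,
                },
            }
            f.write(json.dumps(request) + "\n")

write_batch_file(
    ["Explain quantum computing", "Tell me about San Francisco"],
    "batch_input.jsonl",
)
```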

Rate Limits & Scale

The Batch API operates with dedicated rate limits separate from your real-time usage:

  • Maximum tokens: 10 million tokens enqueued per model
  • Requests per batch: Up to 50,000 individual requests per batch file
  • File size limit: Maximum 100MB per batch input file
  • Separate rate pools: Batch processing doesn't consume your standard API rate limits
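Since a rejected upload wastes a round trip, it is worth checking a batch file locally against these limits before submitting. This is a hypothetical pre-flight helper, not part of the SDK; it enforces the 50,000-request and 100MB limits quoted above plus JSON validity and custom_id uniqueness.

```python
import json
import os

MAX_REQUESTS = 50_000            # requests per batch file
MAX_BYTES = 100 * 1024 * 1024    # 100MB file size limit

def validate_batch_file(path):
    """Return a list of problems found; an empty list means the file looks OK."""
    errors = []
    if os.path.getsize(path) > MAX_BYTES:
        errors.append("file exceeds 100MB limit")
    seen_ids = set()
    count = 0
    with open(path) as f:
        for n, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            count += 1
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                errors.append(f"line {n}: invalid JSON")
                continue
            cid = record.get("custom_id")
            if not cid:
                errors.append(f"line {n}: missing custom_id")
            elif cid in seen_ids:
                errors.append(f"line {n}: duplicate custom_id {cid!r}")
            else:
                seen_ids.add(cid)
    if count > MAX_REQUESTS:
        errors.append("more than 50,000 requests in file")
    return errors
```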

Pricing That Scales With You

  • Pay only for successful completions at an introductory 50% discount
  • No upfront commitments or minimum volumes
  • Same token-based pricing you're familiar with
  • Separate rate limits don't impact your real-time usage

Best Practices for Success

  • Optimal batch sizes: Aim for 1,000-10,000 requests per batch for best performance
  • Model selection: Use smaller models (7B-17B) for simple tasks, larger models (70B+) for complex reasoning
  • Error resilience: Always check the error file for any failed requests
  • Monitoring: Poll status every 30-60 seconds for updates
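The monitoring advice above can be wrapped in a small polling loop. This sketch is not part of the SDK: it assumes a client object exposing the batches.get_batch call shown earlier, and the set of terminal status names is an assumption to adapt to the statuses documented for your account.

```python
import time

def wait_for_batch(client, batch_id, poll_seconds=60, timeout_seconds=24 * 3600):
    """Poll a batch job until it reaches a terminal state or the timeout expires."""
    # Assumed terminal statuses; adjust to match the Batch API documentation.
    terminal = ("COMPLETED", "FAILED", "EXPIRED", "CANCELLED")
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = client.batches.get_batch(batch_id).status
        if status in terminal:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"batch {batch_id} did not finish within the timeout")
```

The default cadence matches the best-effort 24-hour window; for small batches a shorter poll_seconds is reasonable.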

Getting Started

Getting started is easy:

  1. Upgrade to the latest version of the together Python client
  2. Check out our Batch API documentation with code examples
  3. Start with our example cookbook
  4. Submit your first batch today and see the cost savings immediately

The Batch API is available now for all users. Start processing thousands of requests at half the cost.
