
Introducing the Together AI Batch API: Process Thousands of LLM Requests at 50% Lower Cost

June 11, 2025

By Together AI

We're excited to announce the launch of our Batch API, a solution for businesses and developers who need to process large volumes of LLM requests efficiently and cost-effectively. Whether you're running evaluations, classifying large datasets, generating marketing content, or processing data transformations, the Batch API delivers enterprise-grade performance at half the cost of real-time inference.

Why Batch Processing?

Not all AI workloads require immediate responses. Many use cases—from synthetic data generation to offline summarization—can wait hours for results. By processing these requests asynchronously during off-peak times, we can offer the same high-quality outputs at significantly reduced costs while maintaining the reliability you depend on. Most batches complete within hours, with a best-effort 24-hour processing window.

Key Benefits

50% Cost Savings

Process your non-urgent workloads with introductory pricing at half the cost of real-time API calls. Scale your AI inference without scaling your budget.

Large Scale Processing

Submit up to 50,000 requests in a single batch file (up to 100MB). Batch rate limits are independent and separate from your real-time usage.

Batches complete on a best-effort basis within 24 hours, with real-time progress tracking through multiple status stages, from validation to completion.

Simple Integration

Upload a JSONL file with your requests. Monitor progress through the Batch API and download results when complete.

Supported Models

Launched with support for 15 cutting-edge models:

Model ID Size
deepseek-ai/DeepSeek-R1 685B
deepseek-ai/DeepSeek-V3 671B
meta-llama/Llama-3-70b-chat-hf 70B
meta-llama/Llama-3.3-70B-Instruct-Turbo 70B
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 17B
meta-llama/Llama-4-Scout-17B-16E-Instruct 17B
meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo 405B
meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo 70B
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo 8B
mistralai/Mistral-7B-Instruct-v0.1 7B
mistralai/Mixtral-8x7B-Instruct-v0.1 8×7B
Qwen/Qwen2.5-72B-Instruct-Turbo 72B
Qwen/Qwen2.5-7B-Instruct-Turbo 7B
Qwen/Qwen3-235B-A22B-fp8-tput 235B
Qwen/QwQ-32B 32B

How It Works

  1. Prepare Your Requests: Format your requests in a JSONL file, with each line containing a single request with a unique identifier
  2. Upload & Submit: Use our Files API to upload your batch and create the job
  3. Monitor Progress: Track your job through validation, queuing, processing, and aggregation stages
  4. Download Results: Retrieve your completed results in a structured format, with failed requests detailed in a separate error file
    
# Upgrade to the latest together python package: pip install --upgrade together

from together import Together

client = Together()

# 1. Upload your batch file
file_resp = client.files.upload(file="batch_input.jsonl", purpose="batch-api")

# 2. Create the batch job
batch = client.batches.create_batch(file_resp.id)
print(f"Batch created: {batch.id}")

# 3. Monitor progress
batch_status = client.batches.get_batch(batch.id)
print(f"Status: {batch_status.status}")

# 4. Retrieve results when complete
if batch_status.status == 'COMPLETED':
    # Download results using the output_file_id
    client.files.retrieve_content(id=batch_status.output_file_id, output="batch_output.jsonl")
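
Results are returned as a JSONL file as well, with any failed requests detailed in a separate error file. The sketch below shows one way to pair each response with its original request by custom_id once the output file has been downloaded; the exact field names in the output records are an assumption here, so check the Batch API documentation for the authoritative schema.

import json

# Read the downloaded results file and index records by custom_id.
# Assumption: each output line is a JSON object that echoes the
# custom_id from the input file alongside the model's response.
results = {}
with open("batch_output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        results[record["custom_id"]] = record

print(f"Parsed {len(results)} results")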
    

Sample Input Format

    
{"custom_id": "req1", "body": {"model": "deepseek-ai/DeepSeek-V3", 
"messages": [{"role": "user", "content": "Explain quantum computing"}], "max_tokens": 200}}
{"custom_id": "req2", "body": {"model": "deepseek-ai/DeepSeek-V3", 
"messages": [{"role": "user", "content": "Tell me about San Francisco"}], "max_tokens": 200}}

Rate Limits & Scale

The Batch API operates with dedicated rate limits separate from your real-time usage:

  • Maximum tokens: 10 million tokens enqueued per model
  • Requests per batch: Up to 50,000 individual requests per batch file
  • File size limit: Maximum 100MB per batch input file
  • Separate rate pools: Batch processing doesn't consume your standard API rate limits

Pricing That Scales With You

  • Pay only for successful completions at an introductory 50% discount
  • No upfront commitments or minimum volumes
  • Same token-based pricing you're familiar with
  • Separate rate limits don't impact your real-time usage

Best Practices for Success

  • Optimal batch sizes: Aim for 1,000-10,000 requests per batch for best performance
  • Model selection: Use smaller models (7B-17B) for simple tasks, larger models (70B+) for complex reasoning
  • Error resilience: Always check the error file for any failed requests
  • Monitoring: Poll status every 30-60 seconds for updates (see the sketch below)
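
To make the monitoring advice concrete, here is a minimal polling sketch that reuses the client and get_batch call from the earlier example. The terminal status names other than 'COMPLETED' are assumptions for illustration; consult the Batch API documentation for the full set of statuses.

import time

from together import Together

client = Together()
batch_id = "your-batch-id"  # placeholder: use the id returned by create_batch

# Assumed terminal states; 'COMPLETED' matches the earlier example, the
# others are illustrative guesses, so verify against the docs.
TERMINAL_STATES = {"COMPLETED", "FAILED", "EXPIRED", "CANCELLED"}

while True:
    batch_status = client.batches.get_batch(batch_id)
    print(f"Status: {batch_status.status}")
    if batch_status.status in TERMINAL_STATES:
        break
    time.sleep(60)  # poll roughly once a minute, per the guidance above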

Getting Started

Getting started is easy:

  1. Upgrade to the latest version of the together Python client
  2. Check out our Batch API documentation with code examples
  3. Start with our example cookbook
  4. Submit your first batch today and see the cost savings immediately

The Batch API is available now for all users. Start processing thousands of requests at half the cost.

