Introducing the Together AI Batch API: Process Thousands of LLM Requests at 50% Lower Cost
We're excited to announce the launch of our Batch API, a solution for businesses and developers who need to process large volumes of LLM requests efficiently and cost-effectively. Whether you're running evaluations, classifying large datasets, generating marketing content, or processing data transformations, the Batch API delivers enterprise-grade performance at half the cost of real-time inference.
Why Batch Processing?
Not all AI workloads require immediate responses. Many use cases—from synthetic data generation to offline summarization—can wait hours for results. By processing these requests asynchronously during off-peak times, we can offer the same high-quality outputs at significantly reduced costs while maintaining the reliability you depend on. Most batches complete within hours, with a best-effort 24-hour processing window.
Key Benefits
50% Cost Savings
Process your non-urgent workloads with introductory pricing at half the cost of real-time API calls. Scale your AI inference without scaling your budget.
Large Scale Processing
Submit up to 50,000 requests in a single batch file (up to 100MB). Batch rate limits are entirely separate from your real-time usage.
24-Hour Processing Window
Best-effort completion within 24 hours with real-time progress tracking through multiple status stages, from validation to completion.
Simple Integration
Upload a JSONL file with your requests. Monitor progress through the Batch API and download results when complete.
Supported Models
The Batch API launches with support for 15 cutting-edge models; see the Batch API documentation for the current list.
How It Works
- Prepare Your Requests: Format your requests in a JSONL file, with each line containing a single request with a unique identifier
- Upload & Submit: Use our Files API to upload your batch and create the job
- Monitor Progress: Track your job through validation, queuing, processing, and aggregation stages
- Download Results: Retrieve your completed results in a structured format, with failed requests detailed in a separate error file (the full flow is sketched below)
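To make the steps concrete, here is a minimal sketch of that flow using the `together` Python client. The method names (`files.upload`, `batches.create_batch`, `batches.get_batch`, `files.retrieve_content`), the `purpose` value, and the status strings are assumptions about the client interface rather than a definitive reference; check the Batch API documentation and code examples for the exact calls.

```python
# Minimal batch workflow sketch. Method names, the purpose flag, and the
# status strings below are assumptions; confirm them in the Batch API docs.
import time
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# 1. Upload the JSONL file of requests (purpose value is an assumption)
batch_file = client.files.upload(file="batch_requests.jsonl", purpose="batch-api")

# 2. Create the batch job from the uploaded file
job = client.batches.create_batch(batch_file.id, endpoint="/v1/chat/completions")

# 3. Poll until the job reaches a terminal status (every 60s, per best practices)
while True:
    job = client.batches.get_batch(job.id)
    print(f"Batch {job.id}: {job.status}")
    if str(job.status).upper() in ("COMPLETED", "FAILED", "EXPIRED", "CANCELLED"):
        break
    time.sleep(60)

# 4. Download the results file, plus the error file if any requests failed
if str(job.status).upper() == "COMPLETED":
    client.files.retrieve_content(id=job.output_file_id, output="batch_results.jsonl")
    if getattr(job, "error_file_id", None):
        client.files.retrieve_content(id=job.error_file_id, output="batch_errors.jsonl")
```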
Sample Input Format
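Each line of the input file is a standalone JSON object carrying a unique identifier and the request to run. The field names below (`custom_id`, `body`) and the example model are assumptions about the exact schema; match them to the format shown in the Batch API documentation.

```json
{"custom_id": "request-1", "body": {"model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "messages": [{"role": "user", "content": "Summarize this support ticket in two sentences."}], "max_tokens": 200}}
{"custom_id": "request-2", "body": {"model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "messages": [{"role": "user", "content": "Classify this review as positive, negative, or neutral: 'Great battery life.'"}], "max_tokens": 10}}
```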
Rate Limits & Scale
The Batch API operates with dedicated rate limits separate from your real-time usage:
- Maximum tokens: 10 million tokens enqueued per model
- Requests per batch: Up to 50,000 individual requests per batch file
- File size limit: Maximum 100MB per batch input file
- Separate rate pools: Batch processing doesn't consume your standard API rate limits (a quick pre-flight check against these limits is sketched below)
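Before submitting, it can be worth checking a batch file against these limits locally. The sketch below covers only request count and file size; the 10-million-token ceiling is per model and can only be estimated client-side, so it is left out.

```python
# Pre-flight check of a batch input file against the documented limits:
# up to 50,000 requests per file and a 100MB maximum file size.
import os

MAX_REQUESTS = 50_000
MAX_BYTES = 100 * 1024 * 1024  # 100MB (adjust if the limit is decimal MB)

def preflight(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r", encoding="utf-8") as f:
        n_requests = sum(1 for line in f if line.strip())
    print(f"{n_requests} requests, {size / 1_000_000:.1f} MB")
    if n_requests > MAX_REQUESTS:
        raise ValueError(f"Too many requests: {n_requests} > {MAX_REQUESTS}")
    if size > MAX_BYTES:
        raise ValueError(f"File too large: {size} bytes > {MAX_BYTES}")

preflight("batch_requests.jsonl")
```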
Pricing That Scales With You
- Pay only for successful completions at an introductory 50% discount
- No upfront commitments or minimum volumes
- Same token-based pricing you're familiar with
- Separate rate limits don't impact your real-time usage
Best Practices for Success
- Optimal batch sizes: Aim for 1,000-10,000 requests per batch for best performance
- Model selection: Use smaller models (7B-17B) for simple tasks, larger models (70B+) for complex reasoning
- Error resilience: Always check the error file for any failed requests (a retry sketch follows this list)
- Monitoring: Poll status every 30-60 seconds for updates
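For the error-resilience point above, one pattern is to turn the error file into a follow-up batch. The sketch below assumes both the input and error files are JSONL keyed by `custom_id`; the exact error-record fields are an assumption, so adjust them to what the downloaded file actually contains.

```python
# Collect failed requests from the error file and write a retry batch.
# Assumes both files are JSONL keyed by custom_id; adjust field names to
# match the actual error-file schema in the Batch API documentation.
import json

def build_retry_batch(input_path: str, error_path: str, retry_path: str) -> int:
    with open(error_path, "r", encoding="utf-8") as f:
        failed_ids = {json.loads(line)["custom_id"] for line in f if line.strip()}

    n = 0
    with open(input_path, "r", encoding="utf-8") as src, \
         open(retry_path, "w", encoding="utf-8") as dst:
        for line in src:
            if line.strip() and json.loads(line)["custom_id"] in failed_ids:
                dst.write(line)
                n += 1
    return n

n = build_retry_batch("batch_requests.jsonl", "batch_errors.jsonl", "batch_retry.jsonl")
print(f"{n} failed requests written to batch_retry.jsonl")
```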
Getting Started
Getting started is easy:
- Upgrade to the latest version of the `together` Python client
- Check out our Batch API documentation with code examples
- Start with our example cookbook
- Submit your first batch today and see the cost savings immediately
The Batch API is available now for all users. Start processing thousands of requests at half the cost.