Together AI Launches Speech-to-Text: High-Performance Whisper APIs

July 10, 2025

By Together AI

Together AI's First Voice Offering: High-Performance Transcription at Scale

Today marks an important expansion for Together AI. We're launching speech-to-text APIs that solve the fundamental problem holding back voice applications: the speed of high-quality transcription and translation.

Most developers building voice features hit the same wall. Existing transcription services are simply too slow for real-world applications. For longer audio, they're forced into complex chunking workflows that introduce errors and degrade quality. When audio processing becomes a bottleneck, entire categories of applications become impossible.

Performance That Changes What You Can Build

Our Whisper V3 Large deployment delivers transcription 15x faster than OpenAI while maintaining full accuracy. This performance comes from several key optimizations: smart voice activity detection using Silero for precise audio segmentation, intelligent chunking and batching strategies for longer audio files, and engine improvements to the Whisper model itself that maximize GPU utilization.
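
To make the segmentation idea concrete, here is a minimal sketch of VAD-driven chunking using the open-source Silero VAD model via torch.hub. It illustrates the general technique only, not Together AI's internal pipeline, and the file path is a placeholder:

import torch

# Load the public Silero VAD model and its helper functions
model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
(get_speech_timestamps, _, read_audio, _, _) = utils

# Read the audio resampled to the 16 kHz rate the VAD expects
wav = read_audio('path/to/audio.mp3', sampling_rate=16000)

# Detect speech regions; the silence between them provides natural
# chunk boundaries, so no utterance is split mid-word
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
for ts in speech_timestamps:
    print(f"speech from sample {ts['start']} to {ts['end']}")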

This isn't just a technical improvement—it's the difference between transcription as a batch process and transcription as a building block for real-time applications.

Consider what becomes possible when transcription happens in seconds rather than minutes. 

  • Customer support calls analyzed in real time
  • Meeting insights delivered before participants leave the room
  • Voice agents that respond naturally instead of asking users to wait
  • Medical scribes that keep pace with doctor-patient conversations

We've also eliminated the practical limitations other services impose. While OpenAI caps uploads at 25MB, we handle files exceeding 1GB. Our infrastructure processes 30+ minute calls seamlessly at $0.015 per audio minute (a full hour of audio costs just $0.90), delivering substantial cost savings for high-volume applications.

Production-Ready API Design

Our speech-to-text APIs ship with capabilities designed for real deployment scenarios:

  • Enterprise-scale file handling - process files exceeding 1GB compared to OpenAI's 25MB limit, with support for 30+ minute audio without chunking
  • Superior word-level alignment - advanced model delivers the highest-quality timestamps available, outperforming OpenAI (see the sketch after this list)
  • Comprehensive language support - transcription and translation across 50+ languages with automatic detection
  • Dedicated endpoints - reserved GPU capacity for sub-second processing speeds beyond our already-fast serverless offering
  • Batch processing - handle large async workloads with consistent performance for high-volume applications
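
To show what word-level alignment looks like in practice, here is a hedged sketch using the Python SDK. The response_format and timestamp_granularities parameters are assumptions modeled on Whisper-style transcription APIs; confirm the exact names in the Together docs:

from together import Together

client = Together()  # reads the TOGETHER_API_KEY environment variable

# Assumption: request verbose output with per-word timestamps
response = client.audio.transcriptions.create(
    file="path/to/audio.mp3",
    model="openai/whisper-large-v3",
    response_format="verbose_json",
    timestamp_granularities="word"
)

# Each entry carries a word plus its start and end times, which is what
# enables precise alignment for captions, highlighting, and editing
for word in response.words:
    print(word)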

Our interactive playground lets you test transcription quality immediately with your own audio files. No setup required, no complex integration to validate fit. Upload, process, and see results in real time.

Building Toward Complete Voice Infrastructure

Voice AI applications in education, customer success, and interactive agents all face the same fundamental challenge: accumulated latency and quality issues across fragmented speech pipelines. When transcription, reasoning, and response generation happen across multiple providers, the delays compound into user experiences that feel sluggish and unnatural.

Many Together AI customers already use our LLM APIs for conversational applications, from customer support automation to educational tools. They've been requesting voice capabilities to make these experiences more natural and accessible. Adding high-performance speech-to-text establishes the foundation for voice-enabled applications while eliminating a major bottleneck.

Available Now

Our speech-to-text APIs are live today through our standard endpoints. Existing customers can add transcription using the same authentication and billing they're familiar with. We've designed for compatibility with existing Whisper integrations, minimizing migration effort.
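
For teams with existing Whisper integrations built on the OpenAI SDK, migration can be as small as swapping the base URL and API key. A minimal sketch, assuming the OpenAI-compatible endpoint at https://api.together.xyz/v1 (verify against the Together docs):

import os
from openai import OpenAI

# Point an existing OpenAI client at Together's endpoint
client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

with open("path/to/audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=f,
    )
print(transcript.text)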

Visit our interactive playground to test with your audio files, review our speech-to-text documentation for integration details, and explore our transcription and translation API references. Experience transcription that actually works at application scale: the future of voice applications isn't limited by transcription speed anymore.

Use our Python SDK to quickly integrate Whisper into your applications:

from together import Together

# Initialize the client (reads the TOGETHER_API_KEY environment variable)
client = Together()

# Basic transcription: returns the recognized text in the source language
response = client.audio.transcriptions.create(
    file="path/to/audio.mp3",
    model="openai/whisper-large-v3",
    language="en"
)
print(response.text)

# Basic translation: transcribes foreign-language audio and renders it in English
response = client.audio.translations.create(
    file="path/to/foreign_audio.mp3",
    model="openai/whisper-large-v3"
)
print(response.text)

