Model Library

Rime voice models now available on Together AI

High-performance enterprise TTS (text-to-speech) models with deterministic pronunciation and production-grade latency on dedicated and scalable infrastructure.

December 18, 2025

By 

Arielle Fidel, Rajas Bansal, Sahil Yadav, Rishabh Bhargava, Sonny Khan

Summary

  • Two enterprise-grade Rime models on Together AI: Arcana v2 for expressivity, Mist v2 for pronunciation control
  • Deterministic pronunciation: Define a word once via API, it renders the same across calls, channels, and voices
  • Proven at scale: Over a billion conversations powered for multinational companies in telecom, financial services, healthcare, and more
  • Dedicated GPU endpoints on Together AI: Co-located with LLM and STT behind a single API and control plane

A voice agent can be correct and still feel broken. Customers judge it like a phone call: if it hesitates, sounds synthetic, or mispronounces a key term, trust collapses before they can evaluate reasoning. In production, that experience comes down to a real-time loop: STT (speech-to-text) models transcribe speech, the LLM decides what to say, and TTS (text-to-speech) speaks the response. At scale, teams stitch that loop across multiple vendors, so latency, reliability, observability, and ultimately what the customer hears become difficult to manage end-to-end.

Starting today on Together AI, the AI Native Cloud, we're adding Rime Arcana v2 and Mist v2 to the Together Model Library, bringing proprietary TTS models into the same API, authentication, and observability surface you already use for LLM and speech workloads. Arcana v2 delivers expressive, conversational voices trained on real customer service interactions, with 40+ voices across multiple languages and regional dialects for quality-critical scenarios. Mist v2 brings deterministic pronunciation control to high-volume production environments, reaching about 225ms time-to-first-audio on Together AI dedicated endpoints: you define how a term sounds once via the API, and it renders consistently across all voices, flows, and channels. Both run as dedicated endpoints on a single cloud alongside your LLM and STT workloads, so your end-to-end voice stack operates on one production platform instead of being split across multiple providers.
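To make the single-API claim concrete, here is a minimal sketch of requesting speech from a Rime model over Together AI's HTTP API. Treat it as a sketch under assumptions: the endpoint path, model identifier, voice name, and request fields are illustrative placeholders, so check the TTS documentation for the exact values.

```python
import os
import requests

# Minimal sketch: request speech from a Rime model through Together AI's API.
# The endpoint path, model identifier, voice name, and field names below are
# illustrative assumptions; consult the Together AI TTS docs for exact values.
API_URL = "https://api.together.xyz/v1/audio/speech"  # assumed endpoint path

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "rime/mist-v2",   # assumed model identifier
        "input": "Your appointment is confirmed for Tuesday at 3 PM.",
        "voice": "example-voice",  # assumed voice name
    },
    timeout=30,
)
resp.raise_for_status()

# Save the returned audio bytes; the container format depends on API defaults.
with open("confirmation.wav", "wb") as f:
    f.write(resp.content)
```

Because TTS sits behind the same authentication as the LLM and STT endpoints, the credential handling above is the same one the rest of the voice pipeline would use.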

Rime Arcana v2 multilingual
English and Spanish code switching
0:00
The model learns natural breathing, fillers, and backchannel cues, y cambia al español de forma natural siguiendo el ritmo de conversaciones reales en producción.
Try now

Arcana v2: Expressivity for enterprise conversations

Arcana v2 is deployed in production today at organizations ranging from high-growth startups to Fortune 500s. Across these environments, customers report measurable gains, including a 15% lift in sales at a national restaurant chain, a 75% reduction in call abandonment at a telecom provider, and a 10% increase in call success rates.

Trained on the largest proprietary dataset of full-duplex conversational speech

Arcana v2 is trained on real conversations with everyday people — not audiobooks, podcasts, or voiceover announcers. The model learns natural breathing, fillers, backchannel cues, and conversational pacing from production conversations. Callers recognize these patterns and stay in the automated flow longer, improving completion and containment rates.

40+ voices and regional dialects

Arcana v2 ships with more than 40 voices across English, Spanish, French, and German. English includes 18 voices spanning U.K., Australian, and Southern U.S. accents. Spanish includes four primary and three bilingual voices. Everyday words match local usage automatically. For example, "schedule" is pronounced "SHED-ule" in U.K. English and "SKED-ule" in U.S. English.

Rime Arcana v2
Real-time conversation
0:00
Gosh that's a tough one. Hmmm. Let's see here.
Try now

Mist v2: Deterministic pronunciation at production scale

Mist v2 is designed for high-volume production environments where pronunciation accuracy must be guaranteed across millions of calls.

Deterministic pronunciation control

Most TTS models guess pronunciation on each generation. Mist v2 is deterministic: you define how a word should sound once through the API, and that pronunciation holds across more than 40 voices, flows, and channels. No retraining and no per-vendor hacks. When your agent mispronounces a product name, drug, or acronym, you correct it once and the fix applies everywhere. Deterministic pronunciation configuration for Mist v2 is available today for production deployments; contact Sales to enable it for your environment.
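The exact schema for pronunciation configuration is not published here (it is enabled through Sales), but conceptually the workflow is to register an override once and reuse it on every request. The sketch below is hypothetical: the pronunciations field, its respelling notation, and the example terms are placeholders, not the actual API.

```python
import os
import requests

# Hypothetical sketch of "define once, render everywhere" pronunciation control.
# The `pronunciations` field and its respelling notation are placeholders;
# the real configuration is enabled and documented through the Sales team.
custom_terms = {
    "Xolair": "ZOHL-air",  # brand/drug name, hypothetical respelling
    "GUID": "GOO-id",      # acronym, hypothetical respelling
}

resp = requests.post(
    "https://api.together.xyz/v1/audio/speech",  # assumed endpoint path
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "rime/mist-v2",         # assumed model identifier
        "input": "Your Xolair refill is ready; confirm with the GUID we sent.",
        "voice": "example-voice",        # assumed voice name
        "pronunciations": custom_terms,  # hypothetical request field
    },
    timeout=30,
)
resp.raise_for_status()
```

The point of the pattern is that the override lives in one place: switching voices, flows, or channels does not change how "Xolair" is rendered.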

English and Spanish with advanced pronunciation control

Mist v2 supports English and Spanish with deterministic pronunciation control. You specify how brand names, medication names, or technical terms should sound through the API, and Mist v2 renders them consistently at conversational latency.

Proven at scale

Mist v2 serves tens of millions of calls monthly in production customer service and IVR environments. These are full-scale deployments where downtime or quality regression has direct revenue and compliance impact, not limited pilots.

Production-grade latency for conversational agents

Mist v2 reaches about 225ms p50 time-to-first-audio on Together AI dedicated endpoints. Voice agents need total end-to-end latency under 700ms to feel conversational, which means TTS must be fast enough to leave headroom for STT and LLM processing. When you co-locate Mist v2 with LLM and STT on Together AI, the entire pipeline from speech recognition through reasoning to synthesis stays within that budget, directly improving completion rates and user satisfaction.
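As a back-of-the-envelope check on that budget, the sketch below adds the roughly 225ms time-to-first-audio to assumed STT and LLM figures; only the TTS number comes from this post, the rest are illustrative placeholders.

```python
# Back-of-the-envelope latency budget for one voice-agent turn.
# Only the ~225 ms TTS figure comes from this post; the STT and LLM numbers
# are illustrative placeholders, not measurements.
BUDGET_MS = 700        # rough threshold for a turn to feel conversational

stt_ms = 150           # assumed STT finalization latency
llm_ttft_ms = 250      # assumed LLM time-to-first-token
tts_ttfa_ms = 225      # ~p50 time-to-first-audio cited for Mist v2

total_ms = stt_ms + llm_ttft_ms + tts_ttfa_ms
print(f"pipeline: {total_ms} ms, headroom: {BUDGET_MS - total_ms} ms")
# -> pipeline: 625 ms, headroom: 75 ms
```

With those placeholder numbers the turn lands around 625ms, leaving only modest headroom, which is why removing cross-vendor network hops matters as much as raw model speed.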

Conversational realism

Like Arcana v2, Mist v2 is trained on real customer service calls. It preserves natural filler words, backchanneling, breathing patterns, and pacing while maintaining production-grade throughput. This makes it suitable for high-volume scenarios where both realism and responsiveness are required.

Rime Mist v2
Medical Terms
0:00
Next time you're talking to your voice agent, ask it to pronounce acetaminophen and see if it's correct.
Try now

Use cases

Global contact centers

Global teams can mix Arcana v2 and Mist v2 inside the same environment. Arcana v2 handles quality-critical interactions like sales and complex support, while Mist v2 handles high-volume flows including basic inquiries and IVR routing. You can swap models with a configuration change and keep configuration and observability unified through Together AI.
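One way to picture that configuration-level swap is a simple routing table mapping flows to model IDs, so changing models never touches call-site code. The flow names and model identifiers below are illustrative assumptions, not published IDs.

```python
# Sketch of routing conversation flows to different Rime models via config only.
# Flow names and model identifiers are illustrative assumptions.
TTS_MODELS = {
    "sales": "rime/arcana-v2",          # quality-critical interactions
    "complex_support": "rime/arcana-v2",
    "basic_inquiries": "rime/mist-v2",  # high-volume flows
    "ivr_routing": "rime/mist-v2",
}

def tts_model_for(flow: str) -> str:
    """Return the TTS model for a flow; swapping models is a config edit."""
    return TTS_MODELS.get(flow, "rime/mist-v2")

print(tts_model_for("sales"))        # rime/arcana-v2
print(tts_model_for("ivr_routing"))  # rime/mist-v2
```

Because both models sit behind the same endpoint surface, the rest of the request stays identical regardless of which ID the table returns.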

Real-time customer service

High-volume support flows need TTS latency under 250ms to feel conversational, with total end-to-end pipeline (STT → LLM → TTS) under 700ms. Mist v2 meets both thresholds when co-located with LLM and STT on Together AI, removing multi-vendor network overhead and keeping the pipeline inside a single environment.

Healthcare voice agents

Medication names like "lisinopril," "atorvastatin," and "metformin" must be pronounced correctly every time. Mist v2 uses deterministic pronunciation, so you define these terms once and they render correctly across 40+ voices. Running on Together AI's HIPAA-compliant infrastructure means a single compliance review can cover the full voice stack.

Voice banking

Account numbers, routing numbers, and product names need to be read clearly and consistently across millions of calls. Rime’s models are trained on customer service conversations and are built for these high-precision use cases. On Together AI, banks and financial institutions can deploy Rime’s TTS models on SOC 2 Type II and PCI-compliant infrastructure that meets their regulatory requirements.


Production infrastructure on Together AI

Both Rime models run as Together AI Dedicated Endpoints on isolated GPU capacity, alongside LLM and STT workloads. Together AI offers the broadest TTS catalog on a single platform, from open-source models like Orpheus and Kokoro to elite proprietary models like Rime, all with unified tooling.

The platform is built for production AI, with:

Infrastructure

  • ✔ Dedicated GPU capacity with isolated workloads

  • ✔ 99.9% uptime SLA

  • ✔ SOC 2 Type II, HIPAA ready, PCI compliant

  • ✔ Global data centers

  • ✔ WebSocket streaming support

  • ✔ Zero data retention with full data ownership and control

Developer experience

  • ✔ Same SDKs and authentication as LLM and STT endpoints

  • ✔ Unified pronunciation API across Arcana v2 and Mist v2

  • ✔ Single observability and logging surface for entire voice pipeline

  • ✔ Model selection and swapping via configuration

  • ✔ Professional voice cloning services available

  • ✔ Batch processing for high-volume workflows


Get started

→ Try both models now

→ Read TTS Documentation

→ Contact Sales for deterministic pronunciation control, dedicated deployment, and volume pricing

Start building yours here →