AI has spent years in the spotlight for training: the massive, GPU-intensive process of building models. But for most teams deploying AI today, ongoing inference costs are what actually shape their unit economics. Estimates put inference at 80-90% of the total lifetime cost of a production AI system, simply because it runs continuously across every user query, agent step, and API call. And while training is a bounded investment, inference scales with every new user and use case you ship.
At NVIDIA GTC 2026, CEO Jensen Huang framed it plainly: “People pay for information, but people mostly pay for work. Agentic systems get work done.” That shift from AI as a novelty to AI as a workhorse is exactly what’s reshaping infrastructure priorities.
For Together AI, none of this is new. The inference imperative is what we’ve been building for. Our CTO Ce Zhang covered these dynamics in depth at GTC, sharing hard-won lessons from running some of the most demanding production inference workloads in the industry.
Why inference is a different kind of hard
Inference isn’t just “running the model.” In production, it’s an optimization problem across multiple competing dimensions simultaneously:
- Latency shapes what’s possible to build. For applications like coding assistants, real-time support, or conversational agents, sub-500ms response times aren’t a nice-to-have; they determine whether a product feels like software or like waiting. Agentic workflows amplify this: five model calls at 200ms each add up to a full second of accumulated latency before a user sees a result (see the sketch after this list). The threshold matters, and missing it has product consequences.
- Throughput determines your unit economics. AI-native companies face a structurally different cost profile than traditional SaaS. Where legacy software companies target 80-90% gross margins, AI companies commonly operate at 50-60%, with inference alone accounting for roughly 23% of revenue at scaling-stage companies. Faster inference means more requests served per GPU-hour. That math flows directly to margins.
- The model landscape keeps changing. The inference stack optimized for today’s models may need significant rework tomorrow. New architectures, quantization methods, and hardware arrive constantly; staying at the frontier requires continuous investment across the full stack, not just one-time optimization.
- Concurrency is unforgiving. Serving thousands of simultaneous users means navigating wildly different context lengths, latency requirements, and cost profiles — all at once, without degradation. That’s as much a scheduling and orchestration challenge as it is a compute one.
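To make the latency arithmetic concrete, here’s a minimal sketch of how sequential model calls accumulate into user-visible delay. All numbers are illustrative placeholders, not measurements of any particular system:

```python
# Illustrative latency budget for a sequential agentic workflow.
# All numbers are hypothetical, not benchmarks.

def accumulated_latency_ms(steps: int, per_call_ms: float, overhead_ms: float = 0.0) -> float:
    """User-visible latency when model calls run one after another."""
    return steps * (per_call_ms + overhead_ms)

# Five sequential calls at 200 ms each: a full second before the user
# sees a result, even with zero network or tool overhead.
print(accumulated_latency_ms(steps=5, per_call_ms=200))  # 1000.0

# Working backward from a 500 ms interactivity budget:
budget_ms = 500
print(f"Each of 5 calls must finish in {budget_ms / 5:.0f} ms")  # 100 ms
```

The backward calculation is the important one: an interactivity target that’s comfortable for a single call becomes a hard per-call constraint once a workflow chains several of them.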
This is also why the stakes are higher than most teams initially expect.
How Together approaches inference
Together’s approach to inference isn’t a single optimization. It’s a compounding stack of research, systems engineering, and hardware expertise designed to improve continuously as the frontier moves:
- Research that ships to production. The Together Research team has contributed some of the most widely adopted advances in inference efficiency: FlashAttention, now up to FlashAttention-4 (up to 1.3x faster than cuDNN on NVIDIA Blackwell), ThunderKittens, and ATLAS, our adaptive speculative decoding system delivering up to 4x faster LLM inference. This research goes into production for customers, typically within weeks of publication.
- Adaptive speculative decoding. Standard speculative decoding uses a smaller draft model to propose tokens that a larger model verifies in parallel, delivering 1.5-3x speedups on predictable workloads like code completion or structured outputs. Our ATLAS and Aurora systems go further: Aurora is an open-source RL-based framework that learns from live inference traces in real time, adapting as traffic patterns shift. It achieves meaningful speedups over even well-trained static speculators, without interrupting serving.
- Full-stack hardware optimization. Running on the latest NVIDIA Blackwell hardware (GB200 NVL72, HGX B200) means building custom parallelism strategies across 72-GPU meshes, implementing NVFP4 quantization, and constructing weights-to-production pipelines that get model releases live within days. When Cursor needed production-grade latency for millions of active developers, Together AI built the full-stack infrastructure to make it work, handling strict latency SLAs across unpredictable, high-concurrency traffic.
- Intelligent scheduling and batching. High-throughput inference requires making smart real-time decisions: which requests to batch together, how to route based on context length and latency requirements, and when to trade throughput for responsiveness. Together’s inference engine handles this dynamically, extracting maximum efficiency from each GPU-hour without sacrificing the experience that AI-native apps and products depend on.
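The scheduling decisions just described can be illustrated with a toy policy. This is a deliberately simplified sketch, not Together AI’s production scheduler (whose internals aren’t public): it buckets pending requests by context length to limit padding waste, then flushes whichever bucket is closest to missing a latency deadline:

```python
# Toy batching policy: group pending requests by context-length bucket,
# and flush the bucket whose oldest request is nearest its deadline.
# Illustrative sketch only, not Together AI's production scheduler.
import time
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt_tokens: int
    deadline_s: float          # absolute time by which work must start
    arrived_s: float = field(default_factory=time.monotonic)

def bucket(req: Request) -> str:
    """Coarse context-length buckets keep padding waste low within a batch."""
    if req.prompt_tokens <= 1024:
        return "short"
    if req.prompt_tokens <= 8192:
        return "medium"
    return "long"

def pick_batch(pending: list[Request], max_batch: int = 32) -> list[Request]:
    """Flush the bucket whose most urgent request has the least slack."""
    if not pending:
        return []
    now = time.monotonic()
    buckets: dict[str, list[Request]] = {}
    for r in pending:
        buckets.setdefault(bucket(r), []).append(r)
    # Throughput is sacrificed for responsiveness exactly when a deadline
    # is at risk, and maximized otherwise.
    urgent = min(buckets.values(), key=lambda rs: min(r.deadline_s - now for r in rs))
    urgent.sort(key=lambda r: r.arrived_s)
    return urgent[:max_batch]
```

Even this toy version exposes the central trade-off: larger batches raise throughput, but a deadline at risk forces a smaller, earlier flush.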
The economics of getting this right
The Stanford 2025 AI Index documents a remarkable trend: inference costs for GPT-3.5-level performance dropped more than 280-fold between late 2022 and late 2024. But total inference spend is rising; as costs fall, teams deploy AI for more use cases, users, and agent steps. Lower cost per token hasn’t reduced the infrastructure challenge; it has expanded its surface area. As the industry converges on cost per token as a key indicator of AI infrastructure TCO, Together AI’s approach of optimizing the full hardware and software stack continues to deliver better profitability for customers.
For AI-native companies, this makes inference optimization a compounding advantage. Run inference 2x more efficiently and you serve more customers on the same hardware, while opening up use cases that weren’t previously viable. Each efficiency gain flows directly to margins and expands what you’re able to build over time.
That’s what Together AI prides itself on: a platform that isn’t just fast inference, but the infrastructure layer that empowers AI-native teams to grow without their costs growing faster than their revenue.
Run production-scale inference on the AI Native Cloud
Together AI is the AI Native Cloud, offering a full-stack AI platform across Serverless & Dedicated Inference, Accelerated Compute, and Model Shaping — empowering you to get more value out of every GPU-hour, without sacrificing the speed and production-grade reliability users expect.
Inference isn’t a fringe concern. For teams building AI-native apps today, it’s the thing that will shape margins, product roadmaps, and the ability to compete. The good news: the tools to tackle it on the AI Native Cloud have never been better.
Ready to build what’s next on Together AI? Get started today.
_____________________________________________________________________________________
Want to go deeper? Our best practices guide for production inference covers speculative decoding, optimized kernels, quantization, and hardware acceleration in detail.
FAQ
What is AI inference at scale?
AI inference is the process of running a trained model to generate a response — every time a user sends a message, triggers an agent, or makes an API call. At scale, this means serving thousands or millions of simultaneous requests, each with different context lengths, latency requirements, and cost profiles. The infrastructure challenge isn't just compute — it's orchestrating all of that efficiently, continuously, without degrading speed or reliability for any individual user.
Why is AI inference more expensive than training?
Training is an intensive but bounded investment — it happens once (or periodically when a model is updated). Inference, by contrast, runs continuously: every user interaction, every agent step, every API call generates a cost. Industry estimates put inference at 80-90% of the total lifetime cost of a production AI system. As usage grows, so does the bill. For AI-native companies, inference is effectively the cost of goods sold — it scales directly with revenue.
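A back-of-envelope model makes the bounded-versus-continuous distinction tangible. Every number below is a hypothetical placeholder, not a real price or traffic figure:

```python
# Back-of-envelope: when does cumulative inference spend overtake a
# one-time training cost? All figures are hypothetical placeholders.

training_cost = 5_000_000          # one-time, in dollars
cost_per_1k_requests = 2.50        # serving cost, in dollars
requests_per_day = 10_000_000

daily_inference = requests_per_day / 1000 * cost_per_1k_requests  # $25,000/day
days_to_parity = training_cost / daily_inference
print(f"Inference matches training cost in {days_to_parity:.0f} days")  # 200 days

# Over a 3-year deployment, inference dominates:
lifetime_inference = daily_inference * 365 * 3
share = lifetime_inference / (lifetime_inference + training_cost)
print(f"Inference share of lifetime cost: {share:.0%}")  # ~85%
```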
What is speculative decoding?
Speculative decoding is an inference acceleration technique where a smaller, faster "draft" model proposes several tokens at once, which a larger target model then verifies in parallel. Tokens that match are accepted; the rest are discarded and regenerated. When the draft model is well-aligned with the target, this approach can deliver 1.5–3x speedups without changing the output. It's particularly effective for predictable workloads like code completion or structured data generation. Together AI's ATLAS system extends this further with adaptive speculative decoding that learns and adjusts from live traffic in real time.
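For readers who want the mechanics, here is a minimal sketch of the standard greedy-acceptance loop. `draft_model` and `target_model` are hypothetical stand-ins, and this illustrates the published technique, not Together AI’s implementation:

```python
# Minimal greedy speculative decoding loop (illustrative sketch).
# draft_model and target_model are hypothetical stand-ins; real serving
# stacks work on logits and KV caches, which are elided here.

def speculative_decode(prompt_ids, draft_model, target_model,
                       k: int = 4, max_new_tokens: int = 256):
    out = list(prompt_ids)
    while len(out) - len(prompt_ids) < max_new_tokens:
        # 1) The small draft model proposes k tokens autoregressively (cheap).
        draft_tokens = draft_model.generate(out, num_tokens=k)
        # 2) The large target model scores the whole proposed block in ONE
        #    forward pass, returning its greedy choice at each of the k+1
        #    positions (after out, after out + 1 draft token, and so on).
        target_tokens = target_model.greedy_next_tokens(out, draft_tokens)
        # 3) Accept the longest prefix where draft and target agree.
        n_accepted = 0
        for d, t in zip(draft_tokens, target_tokens):
            if d != t:
                break
            n_accepted += 1
        out.extend(draft_tokens[:n_accepted])
        # 4) Append the target's own next token, so output is identical to
        #    plain greedy decoding; only the wall-clock time changes.
        out.append(target_tokens[n_accepted])
    return out
```

Because the first mismatch is always replaced by the target model’s own token, the output matches ordinary greedy decoding token for token; the speedup comes from verifying k draft tokens in one forward pass instead of k sequential ones.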
What is adaptive speculative decoding?
Standard speculative decoding relies on a static draft model — one trained offline and fixed at deployment. The problem is that real-world traffic patterns shift constantly, and a static draft model's accuracy degrades as the domain changes. Adaptive speculative decoding solves this by continuously learning from live inference traces, updating the draft model without interrupting serving. Together AI's Aurora framework is an open-source, RL-based implementation that achieves meaningful speedups over well-trained static speculators, even when starting from scratch.
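Aurora’s RL machinery is beyond the scope of a snippet, but the feedback loop it relies on (acceptance rates measured on live traffic) can be illustrated with a toy controller that adapts the speculation depth k. This captures the adaptive idea in spirit only; it is not Aurora’s actual algorithm:

```python
# Toy adaptive controller: tune speculation depth k from live acceptance
# rates. Illustrates the general feedback loop behind adaptive speculative
# decoding; Aurora's actual RL-based method is different.

class AdaptiveDepth:
    def __init__(self, k: int = 4, k_min: int = 1, k_max: int = 8,
                 ema: float = 0.95):
        self.k, self.k_min, self.k_max = k, k_min, k_max
        self.ema = ema
        self.accept_rate = 0.7  # optimistic prior

    def observe(self, n_accepted: int) -> None:
        """Update a moving average of per-token acceptance from live traces."""
        rate = n_accepted / max(self.k, 1)
        self.accept_rate = self.ema * self.accept_rate + (1 - self.ema) * rate
        # Deep speculation only pays off when drafts are usually accepted;
        # shrink k when traffic drifts away from the draft model's domain.
        if self.accept_rate > 0.8 and self.k < self.k_max:
            self.k += 1
        elif self.accept_rate < 0.4 and self.k > self.k_min:
            self.k -= 1
```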
What does "inference research" mean in the context of AI?
Inference research is the field of study focused on making AI models run faster, cheaper, and more efficiently in production — without sacrificing output quality. It encompasses algorithm-level work (like speculative decoding and attention optimization), systems-level work (like kernel engineering and request scheduling), and hardware-level work (like quantization and GPU utilization). It's distinct from model research, which focuses on improving what models know or can do. As inference costs become the dominant expense in AI deployment, inference research has become one of the highest-leverage areas in applied AI.
How does inference optimization affect AI product economics?
Inference optimization directly improves unit economics: faster inference means more requests served per GPU-hour, which translates to lower cost per request. At scale, even modest efficiency gains compound significantly — a 2x improvement in throughput effectively halves infrastructure costs for the same workload. This matters for product teams because it determines what use cases are economically viable, how quickly margins improve as volume grows, and whether a product can sustain competitive pricing as the market matures.
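The arithmetic behind that claim is short enough to write out. Numbers are illustrative placeholders:

```python
# Unit economics of a throughput gain, with hypothetical numbers.
gpu_hour_cost = 3.00            # dollars per GPU-hour (placeholder)
baseline_rps = 10               # requests/second per GPU before optimization

def cost_per_request(rps: float, hourly: float = gpu_hour_cost) -> float:
    return hourly / (rps * 3600)

before = cost_per_request(baseline_rps)
after = cost_per_request(baseline_rps * 2)   # 2x throughput improvement
print(f"${before:.6f} -> ${after:.6f} per request")  # $0.000083 -> $0.000042
assert after == before / 2                   # cost per request exactly halves
```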
