Together Code Interpreter: execute LLM-generated code seamlessly with a simple API call
Today, we introduced Together Code Interpreter (TCI): an API that lets you run code generated by LLMs and get an instant response. With TCI, you can build smarter apps with richer responses.
Millions of developers and businesses are using LLMs to generate code and configure agentic workflows. However, while LLMs excel at generating code, they can’t execute it, leaving developers to manually test, debug, and implement the generated code in separate environments.
Together Code Interpreter solves this shortcoming of LLMs with a straightforward approach to securely execute LLM-generated code at scale, simplifying agentic workflow development and opening new possibilities for reinforcement learning operations.
In this deep dive blog post, we'll explore how you can use our Python library to start sessions and execute Python code, and discuss the key applications of TCI.
Configuring agentic workflows at scale
As agentic workflow development becomes a priority for businesses looking to leverage autonomous task management, these AI pioneers need a fast way to run LLM-generated code without doing all the plumbing that advanced sandbox tools require. That's why we built Together Code Interpreter as a straightforward API that:
- Takes LLM-generated code as input.
- Creates a session to execute that code in a secure, fast sandboxed environment.
- Outputs the result of the code execution (stdout, stderr).
The output can then be fed back to the LLM for continuous iteration in a closed-loop agentic workflow system, ultimately allowing LLMs to output richer responses to users.
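Here's what that loop looks like with our Python SDK, as a minimal sketch (see the docs for the exact method and response schema):

```python
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Execute a snippet of LLM-generated code in a secure sandboxed session.
response = client.code_interpreter.run(
    code='print("2 + 2 =", 2 + 2)',
    language="python",
)

# Each output carries a type (e.g. stdout, stderr) and its data,
# ready to be fed back to the LLM for the next iteration.
for output in response.data.outputs:
    print(output.type, "->", output.data)
```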

A good example of this is asking an LLM like Qwen Coder 32B to draw a chart. While the LLM will go to some lengths to attempt to represent this chart in plain text, it cannot execute the code to output an actual chart. When we allow the LLM to use Together Code Interpreter, it can generate Python code, execute it, and output an image of the chart back to the user.
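Here's a sketch of that closed loop in Python (the model ID and output handling are illustrative):

```python
from together import Together

client = Together()

# 1. Ask a code model to write the plotting code.
prompt = (
    "Write matplotlib code that plots y = x**2 for x in 0..10 and "
    "displays the figure. Reply with raw Python only, no markdown fences."
)
completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
generated_code = completion.choices[0].message.content

# 2. Execute the generated code in a TCI sandbox.
result = client.code_interpreter.run(code=generated_code, language="python")

# 3. Surface the rendered chart to the user, or feed stdout/stderr back
#    to the model for another pass through the loop.
for output in result.data.outputs:
    print(output.type)  # e.g. stdout, stderr, or image data for the chart
```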
Enhancing reinforcement learning
Because of its ability to quickly execute code and output a result, Together Code Interpreter has generated a lot of interest from ML teams training models with reinforcement learning (RL).
TCI plays a critical role by executing model-generated code during training, enabling automated evaluation through rigorous unit testing. During each RL iteration, batches are evaluated across extensive problem sets—often involving over a thousand individual unit tests executed simultaneously. TCI effortlessly scales to handle hundreds of concurrent sandbox executions, providing secure environments that isolate the execution, expose standard input/output interfaces (stdin, stdout, and evaluated output), and integrate seamlessly into existing RL workflows.
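As a rough sketch of what this looks like on the training side (the TCI call shape follows our docs; the pass/fail check is a deliberate simplification of a real reward harness):

```python
from concurrent.futures import ThreadPoolExecutor

from together import Together

client = Together()

def reward(solution: str, tests: str) -> float:
    """Sparse outcome reward: full credit only if every unit test passes."""
    result = client.code_interpreter.run(
        code=solution + "\n\n" + tests,
        language="python",
    )
    # Treat any error/stderr output (e.g. a failed assert) as a failed
    # rollout; output type names may differ slightly in practice.
    failed = any(o.type in ("error", "stderr") for o in result.data.outputs)
    return 0.0 if failed else 1.0

# One sandboxed evaluation per problem in the RL batch, run concurrently.
batch = [
    ("def add(a, b):\n    return a + b", "assert add(2, 2) == 4"),
]
with ThreadPoolExecutor(max_workers=32) as pool:
    rewards = list(pool.map(lambda pair: reward(*pair), batch))
```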
We're particularly proud to partner with Agentica, an open-source initiative from Berkeley AI Research and Sky Computing Lab, to integrate TCI into their reinforcement learning operations. Agentica used TCI to run unit tests across batches of 1024 coding problems simultaneously, significantly accelerating their training cycles at a low cost of 3¢ per problem, while improving model accuracy through a rigorous sparse Outcome Reward Model: a full reward is assigned only if all 15 sampled unit tests pass, and none if even one test fails or the output is incorrectly formatted.
The resulting model, DeepCoder-14B-Preview, achieved an impressive 60.6% Pass@1 accuracy on LiveCodeBench, matching the performance of o3-mini-2025-01-31 (Low) and o1-2024-12-17 with just 14B parameters, a fantastic testament to the massive impact of code interpretation during RL operations.
"Together Code Interpreter has dramatically accelerated our RL post-training cycles, enabling us to reliably scale to over 100 concurrent coding sandboxes and run thousands of code evaluations per minute. Its reliable and scalable infrastructure has proven invaluable." — Michael Luo & Sijun Tan, Project lead at Agentica
We are excited to continue supporting ML teams like Agentica who are pushing the frontier of advanced LLMs for coding. For detailed integration instructions, check out Agentica's open-source repo.
Ready for scale
To make it easier for developers to leverage TCI at scale, regardless of their use case, we have introduced the concept of "sessions" as the unit of measurement for TCI usage and billing. A session is an active code execution environment with a lifespan of 60 minutes, and it can be called multiple times for several different code execution jobs. To simplify billing, we price TCI usage at $0.03/session.
Sessions allow users to build on prior executions. By passing the session ID in a request, you can access the same variables across multiple requests, as in the sketch below.
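A minimal sketch of session reuse (assuming the `session_id` request parameter and response field described in our docs):

```python
from together import Together

client = Together()

# First call: define a variable and capture the session ID.
first = client.code_interpreter.run(code="x = 41", language="python")
session_id = first.data.session_id

# Second call: the same sandbox still holds `x` from the first call.
second = client.code_interpreter.run(
    code="print(x + 1)",
    language="python",
    session_id=session_id,
)
for output in second.data.outputs:
    print(output.type, "->", output.data)  # stdout -> 42
```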
For more information about sessions, TCI billing, and rate limits, please check our Docs.
MCP support
We’re also launching with MCP support! The Together Code Interpreter MCP Server can be accessed on Smithery. This lets you add code interpreting abilities to any MCP client like Cursor, Windsurf, or your own chat app.
Get started
You can start using Together Code Interpreter today by using our Python SDK or our API.
Don’t miss our docs and our cookbook to get up and running with your first TCI instance in minutes!