Published 3/11/2026

Together AI Brings NVIDIA Nemotron 3 to Developers on Day 0

We’re excited to bring NVIDIA Nemotron 3 Super to Together AI, the AI Native Cloud. Built for multi-agent orchestration and complex reasoning, Nemotron 3 Super is a 120B-parameter (12B active) hybrid model that combines Transformer and Mamba architectures.

Running Nemotron 3 Super on Together AI Dedicated Inference allows engineering teams to deploy this open-weights model on managed infrastructure designed for high-throughput inference workloads.

Architectural capabilities for agentic workflows

Modern agentic systems that analyze massive document stores or orchestrate multi-step planning require models that can maintain state across long contexts without sacrificing generation speed. Nemotron 3 Super introduces several architectural innovations that make it well suited for these workloads:

  • Hybrid MoE Architecture (Transformer + Mamba): By combining Mamba’s efficient sequence processing with Transformer attention, the model maintains strong reasoning capability while keeping active parameters (12B out of 120B) manageable for faster inference. Its Latent MoE design enables the model to call four experts for the inference cost of one, improving efficiency for reasoning-heavy workloads.
  • 1M-Token Context Window: The 1-million-token context length allows applications to process entire codebases, maintain state across long agent trajectories, and inject significantly larger retrieval payloads directly into prompts.
  • Multi-Token Prediction: Nemotron 3 Super is trained to generate several tokens simultaneously in a single forward pass. For applications that produce large outputs such as code generation or structured responses, this drastically reduces generation latency, delivering over 50% higher token generation speeds compared to current leading open models.
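To make the "active parameters" idea above concrete, here is a minimal, purely illustrative sketch of top-k expert routing in a Mixture-of-Experts layer. This is not Nemotron 3 Super's actual routing code; the expert count and gating are stand-in assumptions that just show why only a small slice of a large model's parameters runs per token.

```python
# Conceptual sketch of top-k expert routing in a Mixture-of-Experts layer.
# NOT Nemotron's implementation -- a plain-Python illustration of why only
# a fraction of total parameters is "active" for any given token.
import math
import random

random.seed(0)

NUM_EXPERTS = 10   # hypothetical expert count
TOP_K = 1          # experts actually evaluated per token

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(token_logits, k=TOP_K):
    """Pick the k highest-scoring experts; only those run a forward pass."""
    probs = softmax(token_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k], probs

# One token's router scores over the experts (random stand-ins here).
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
active, probs = route(logits)
print(f"active experts: {active} (out of {NUM_EXPERTS})")
```

Because only the selected experts execute, per-token compute scales with the active-parameter count rather than the full model size.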

To achieve leading accuracy across benchmarks like AIME 2025 and SWE-Bench Verified, the model was trained using multi-environment reinforcement learning (RL) and NVIDIA-generated high-quality synthetic data. Because NVIDIA releases the model with open weights, datasets, and development recipes, engineering teams retain full control to customize and fine-tune it for their specific environments.

Running Nemotron 3 Super on Together AI

Serving a 120B-parameter hybrid model with a 1M-token context window typically requires distributed compute across multiple nodes. Nemotron 3 Super is available through Together AI Dedicated Inference, offering an infrastructure environment tailored for both experimentation and production scale without the overhead of GPU provisioning:

  • Single-GPU Deployment: The model is optimized for a single-GPU footprint and can be deployed on a single NVIDIA H200 or H100. Together AI handles the underlying infrastructure orchestration, allowing teams to deploy these workloads without provisioning or managing GPUs directly.
  • Research-Optimized Performance: Running hybrid MoE architectures efficiently requires highly tuned serving software. Together AI accelerates model execution through the Together Inference Engine and custom CUDA kernels. This stack helps teams achieve lower latency and higher throughput during live inference.
  • Production-Grade Isolation: Dedicated Inference isolates workloads on reserved hardware to support predictable throughput and consistent performance at scale. The platform operates on enterprise-ready infrastructure, including a 99.9% uptime SLA and SOC 2 compliance.
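Once a dedicated endpoint is live, it is reachable through Together AI's OpenAI-compatible chat completions API. The sketch below assembles a request for such an endpoint; note that the model slug shown is a placeholder assumption, so substitute the slug your dashboard displays for the actual deployment.

```python
# Sketch of preparing a call to a Together AI dedicated endpoint via its
# OpenAI-compatible chat completions API. The model slug is a placeholder
# assumption -- use the slug shown for your own deployment.
import json
import os

API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "nvidia/nemotron-3-super"  # hypothetical slug; check your dashboard

def build_request(prompt, max_tokens=512):
    """Assemble the headers and JSON body for a chat completion call."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return headers, json.dumps(body)

headers, body = build_request("Summarize this repository's architecture.")
print(body)
```

To send the request, POST `body` with those headers to `API_URL` using any HTTP client (e.g. `requests.post(API_URL, headers=headers, data=body)`).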

Get Started

Developers can begin building with Nemotron 3 Super on Together AI today.

Run large-context reasoning workloads, deploy multi-agent systems, and scale production inference without managing GPU infrastructure.

FAQ

What is NVIDIA Nemotron 3 Super? 

NVIDIA Nemotron 3 Super is a hybrid Mixture-of-Experts (MoE) reasoning model designed for complex AI workflows and multi-step problem solving. It combines Transformer and Mamba components to deliver strong reasoning capability with efficient inference.

What architecture does Nemotron 3 Super use? 

Nemotron 3 Super uses a hybrid Mixture-of-Experts architecture that combines Transformer attention with Mamba sequence processing. This design improves compute efficiency while maintaining strong reasoning performance.

What context length does Nemotron 3 Super support? 

Nemotron 3 Super supports context windows of up to 1 million tokens, enabling applications to analyze large document collections, maintain long conversations, and incorporate extensive retrieval context into reasoning workflows.
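As a quick way to reason about that budget, the sketch below estimates whether a set of documents fits inside a 1M-token window. It uses the rough ~4-characters-per-token heuristic as an assumption; for exact counts you would use the model's own tokenizer.

```python
# Rough sketch of checking whether a document set fits a 1M-token context.
# Uses the common ~4 characters-per-token heuristic (an approximation);
# a real tokenizer should be used for exact counts.
CONTEXT_LIMIT = 1_000_000
CHARS_PER_TOKEN = 4  # heuristic, not exact

def estimate_tokens(text):
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(docs, reserve_for_output=8_000):
    """Return (fits, total): whether docs plus a reply reserve fit."""
    total = sum(estimate_tokens(d) for d in docs)
    return total + reserve_for_output <= CONTEXT_LIMIT, total

docs = ["x" * 2_000_000, "y" * 1_000_000]   # ~750K estimated tokens total
ok, total = fits_in_context(docs)
print(ok, total)
```

Reserving output headroom up front avoids truncated replies when the retrieval payload grows close to the limit.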

What types of applications can use Nemotron 3 Super? 

Nemotron 3 Super is well suited for applications that coordinate multiple agents or operate across large knowledge sources. Examples include developer assistants that analyze and refactor codebases, enterprise systems that process large document collections, cybersecurity workflows that triage vulnerabilities or analyze system logs, and orchestration systems that route tasks across specialized agents based on user intent.
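The orchestration pattern mentioned above can be sketched very simply. Here, keyword matching stands in for what would in practice be a classifier or an LLM-based router; the agent names and keyword lists are illustrative assumptions, not part of any Together AI API.

```python
# Minimal sketch of an intent-based task router for a multi-agent system.
# Keyword matching stands in for a classifier or LLM-based router; the
# agent names and keywords below are hypothetical.
AGENTS = {
    "code": "code-analysis-agent",
    "docs": "document-qa-agent",
    "security": "log-triage-agent",
}

KEYWORDS = {
    "code": ["refactor", "function", "bug", "codebase"],
    "docs": ["contract", "report", "document", "summarize"],
    "security": ["vulnerability", "log", "alert", "cve"],
}

def route_task(user_request):
    """Return the agent whose keyword list best matches the request."""
    text = user_request.lower()
    scores = {
        intent: sum(word in text for word in words)
        for intent, words in KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return AGENTS[best] if scores[best] > 0 else "general-agent"

print(route_task("Refactor this function in our codebase"))
```

In a production system the router itself could be a call to the model, returning a structured decision (e.g. JSON mode) that downstream code dispatches on.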

How do developers run Nemotron 3 Super on Together AI?

Nemotron 3 Super is deployed on Together AI through Dedicated Model Inference. Dedicated deployments allow teams to run models on reserved infrastructure designed for production workloads with predictable performance.

Do developers need to manage GPUs?

No. Together AI manages the underlying infrastructure, allowing developers to deploy and scale AI workloads without provisioning GPU resources directly.

Why use Together AI for these workloads?

Together AI provides infrastructure designed for large-scale AI systems, including reliable inference, serverless scaling, and managed infrastructure for modern AI applications.




