Model Library

DeepSeek-V3.1: Hybrid Thinking Model Now Available on Together AI

August 27, 2025

By Together AI

Deploy the hybrid model with both fast response and deep reasoning modes - now production-ready on Together's infrastructure

Starting today on Together AI, you can access DeepSeek-V3.1 — the hybrid model that supports both fast responses and deep reasoning modes through configurable chat templates. Choose non-thinking mode for speed or thinking mode for complex analysis.

TL;DR:

  • DeepSeek-V3.1 API on Together AI: Choose between fast responses and deep reasoning modes in one model
  • Efficiency breakthrough: Comparable quality to DeepSeek-R1 but significantly faster, making deep reasoning practical for production
  • Built-in agent support: Native code and search agent capabilities with specialized tool-calling workflows, production-ready on Together's optimized infrastructure
  • Available now on Together AI: Serverless APIs with enterprise reliability and fine-tuning capabilities

How the Hybrid Model Works

DeepSeek-V3.1 operates in two modes through configurable chat templates. You can select non-thinking mode for tasks requiring fast responses or thinking mode for problems needing step-by-step analysis.

DeepSeek-V3.1's thinking mode achieves comparable answer quality to DeepSeek-R1 while responding significantly faster, making deep reasoning practical for production applications.

  • ⚡ Non-Thinking Mode: fast responses for routine tasks such as code completion, simple queries, and API calls
  • 🧠 Thinking Mode: deep reasoning for complex problems such as debugging, analysis, and multi-step workflows

The model includes built-in support for code agents and search agents, with specialized formatting optimized for multi-turn tool-calling workflows. You control mode selection through the chat template configuration.
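To make the template-driven switch concrete, here is a minimal sketch of how a hybrid chat template can branch on a thinking flag. The special tokens below (`<|user|>`, `<|assistant|>`, `<think>`, `</think>`) are illustrative assumptions, not the model's exact template; consult the DeepSeek-V3.1 model card for the real one. The idea is that thinking mode opens a reasoning block for the assistant turn, while non-thinking mode closes it immediately so the model answers directly:

```python
def build_prompt(messages, thinking=False):
    """Render a toy chat prompt, toggling the reasoning prefix.

    Token names here are placeholders standing in for the model's
    actual chat-template tokens.
    """
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>{m['content']}")
    # Thinking mode opens a reasoning block; non-thinking mode closes it
    # immediately, steering generation straight to the final answer.
    parts.append("<|assistant|>" + ("<think>" if thinking else "</think>"))
    return "".join(parts)

msgs = [{"role": "user", "content": "Summarize the release notes."}]
fast = build_prompt(msgs, thinking=False)  # ends with "</think>"
deep = build_prompt(msgs, thinking=True)   # ends with "<think>"
```

One template, one model, two behaviors: the serving stack only has to flip a single flag when rendering the prompt.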

The model is built on DeepSeek-V3.1-Base through substantial long-context extension training: 630B tokens for the 32K-context phase and 209B tokens for the 128K-context phase, ensuring robust performance across extended conversations and large codebases.

Real Applications for Hybrid Intelligence

DeepSeek-V3.1's hybrid architecture delivers measurable improvements across established coding and agent tasks:

💻 Code Agents
  • ⚡ Non-thinking: generate API endpoints, write unit tests, fix syntax errors
  • 🧠 Thinking: debug distributed-system failures, architect microservices decomposition, design database migration strategies with zero-downtime requirements

🔍 Search Agents
  • ⚡ Non-thinking: basic fact lookup, simple database queries, standard search operations
  • 🧠 Thinking: multi-step research across enterprise knowledge bases, correlating information from multiple data sources, designing complex search workflows with filtering and analysis

📄 Document Processing
  • ⚡ Non-thinking: extract entities from contracts, classify document types, basic text parsing
  • 🧠 Thinking: analyze multi-document compliance workflows, cross-reference contract terms with regulatory requirements, synthesize insights from large document collections

The hybrid model allows you to choose the appropriate cognitive mode for your task complexity, eliminating the need to route between different specialized models for these workflows.
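If you do want explicit control rather than a manual choice per request, a thin routing layer can pick the mode from the task description. The heuristic below is a deliberately simple illustration, not a recommended production router; the marker keywords are assumptions you would tune for your own workloads:

```python
def choose_mode(task: str) -> str:
    """Toy heuristic: route obviously multi-step work to thinking mode.

    The marker list is illustrative; a real router would use your own
    task taxonomy or a lightweight classifier.
    """
    complex_markers = ("debug", "architect", "migration",
                       "multi-step", "cross-reference", "compliance")
    if any(marker in task.lower() for marker in complex_markers):
        return "thinking"
    return "non-thinking"

print(choose_mode("Fix syntax errors in utils.py"))          # non-thinking
print(choose_mode("Debug a distributed system failure"))     # thinking
```

Because both modes live in one model, the router only changes a request option rather than switching endpoints or providers.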

Performance Across Both Modes

| Benchmark | Non-Thinking | Thinking | Difference |
|---|---|---|---|
| 🎯 MMLU-Redux | 91.8% | 93.7% | +1.9 pts |
| 🔬 GPQA-Diamond | 74.9% | 80.1% | +5.2 pts |
| 💻 LiveCodeBench | 56.4% | 74.8% | +18.4 pts |
| 🧮 AIME 2024 | 66.3% | 93.1% | +26.8 pts |

The performance differences show where each mode excels: non-thinking mode handles routine tasks efficiently, while thinking mode delivers substantial gains on complex problems that require multi-step reasoning.

Production Deployment on Together AI

The new DeepSeek hybrid model is available both through our DeepSeek-V3.1 serverless API and Dedicated Endpoints.

⚡ Technical Specs
✓ 671B total parameters
✓ 37B active per token
✓ 128K context (extended training)
✓ MIT licensed
🌍 Infrastructure
✓ 99.9% uptime SLA
✓ North American data centers
✓ Serverless scaling
✓ SOC 2 compliant
🛠 Developer Tools
✓ OpenAI-compatible APIs
✓ Fine-tuning available
✓ Batch processing
✓ Custom endpoints

Together AI's optimizations ensure both thinking and non-thinking modes perform reliably under production workloads. Our NVIDIA GPU infrastructure is specifically tuned for large mixture-of-experts models like DeepSeek-V3.1, with transparent pricing that scales with your usage patterns.
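The mixture-of-experts numbers above are what make the hybrid design economical: although the model has 671B total parameters, only 37B are active for any given token. A quick back-of-the-envelope calculation, using only the figures from the spec list:

```python
# Figures from the model's technical specs
total_params = 671e9    # 671B total parameters
active_params = 37e9    # 37B active per token

# Fraction of weights exercised on each forward pass
active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of parameters active per token")  # ~5.5%
```

Roughly 5.5% of the weights participate in each token's forward pass, which is why per-token compute stays closer to that of a ~37B dense model even in thinking mode.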

Getting Started

DeepSeek-V3.1 on Together AI provides both responsive interaction and deep reasoning capabilities and can be deployed immediately through Together AI's production APIs:

Use our Python SDK to quickly integrate DeepSeek-V3.1 into your applications:

    
      from together import Together

      client = Together()

      # Stream a chat completion from DeepSeek-V3.1
      response = client.chat.completions.create(
          model="deepseek-ai/DeepSeek-V3.1",
          messages=[
              {"role": "user", "content": "Explain the tradeoff between thinking and non-thinking mode in two sentences."}
          ],
          stream=True,
      )

      for token in response:
          # Guard against keep-alive chunks with empty choices or None deltas
          if token.choices and token.choices[0].delta.content:
              print(token.choices[0].delta.content, end="", flush=True)
    

Start building today:


Ready to dive in?

Follow our step-by-step Quickstart to install, authenticate, and run your first DeepSeek-V3.1 inference in minutes.

Try DeepSeek V3.1

Contact us to discuss enterprise deployments, custom integrations, or volume pricing for DeepSeek-V3.1.

