DeepSeek-V3.1: Hybrid Thinking Model Now Available on Together AI
Deploy the hybrid model with both fast response and deep reasoning modes - now production-ready on Together's infrastructure
Starting today on Together AI, you can access DeepSeek-V3.1 — the hybrid model that supports both fast responses and deep reasoning modes through configurable chat templates. Choose non-thinking mode for speed or thinking mode for complex analysis.
TL;DR:
- DeepSeek-V3.1 API on Together AI: Choose between fast responses and deep reasoning modes in one model
- Efficiency breakthrough: Comparable quality to DeepSeek-R1 but significantly faster, making deep reasoning practical for production
- Built-in agent support: Native code and search agent capabilities, with specialized tool-calling workflows that are production-ready on Together's optimized infrastructure
- Available now on Together AI: Serverless APIs with enterprise reliability and fine-tuning capabilities
How the Hybrid Model Works
DeepSeek-V3.1 operates in two modes through configurable chat templates. You can select non-thinking mode for tasks requiring fast responses or thinking mode for problems needing step-by-step analysis.
DeepSeek-V3.1's thinking mode achieves comparable answer quality to DeepSeek-R1 while responding significantly faster, making deep reasoning practical for production applications.
The model includes built-in support for code agents and search agents, with specialized formatting optimized for multi-turn tool-calling workflows. You control mode selection through the chat template configuration.
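As an illustrative sketch of how a hybrid chat template can switch modes (the special-token names below are simplified placeholders, not DeepSeek-V3.1's exact template), the key idea is prefilling the assistant turn: an open think tag invites reasoning tokens, while a pre-closed think block tells the model to answer directly.

```python
# Simplified sketch of a hybrid chat template. The special-token names
# are illustrative assumptions, not DeepSeek-V3.1's exact template.
def render_prompt(user_msg: str, thinking: bool) -> str:
    prompt = f"<|User|>{user_msg}<|Assistant|>"
    if thinking:
        # Thinking mode: leave the think block open so the model generates
        # step-by-step reasoning before its final answer.
        return prompt + "<think>"
    # Non-thinking mode: close the think block up front so the model
    # skips reasoning and responds immediately.
    return prompt + "</think>"

print(render_prompt("What is 2 + 2?", thinking=False))
```

The same user message thus yields two different prefixes, and the serving stack picks one per request rather than loading two separate models.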
DeepSeek-V3.1 is built on DeepSeek-V3.1-Base with substantial long-context extension training — 630B tokens for the 32K context phase and 209B tokens for the 128K context phase — giving it robust performance across extended conversations and large codebases.
Real Applications for Hybrid Intelligence
DeepSeek-V3.1's hybrid architecture delivers measurable improvements across established coding and agent tasks.
The hybrid model allows you to choose the appropriate cognitive mode for your task complexity, eliminating the need to route between different specialized models for these workflows.
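One way to exploit this in an application is a lightweight router that decides per request whether to enable thinking mode. The keyword heuristic below is purely illustrative, and `chat_template_kwargs` is an assumption about how a serving stack might expose the mode switch; real systems might route on prompt length, task type, or a small classifier.

```python
def needs_thinking(task: str) -> bool:
    """Illustrative heuristic: send multi-step work to thinking mode."""
    markers = ("prove", "debug", "plan", "analyze", "step-by-step")
    return any(m in task.lower() for m in markers)

def request_kwargs(task: str) -> dict:
    """Build per-request settings. `chat_template_kwargs` is a hypothetical
    knob for toggling the template's thinking mode, not a confirmed API."""
    return {
        "messages": [{"role": "user", "content": task}],
        "chat_template_kwargs": {"thinking": needs_thinking(task)},
    }

print(request_kwargs("Debug this failing unit test")["chat_template_kwargs"])
```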
Performance Across Both Modes
The performance differences show where each mode excels. Non-thinking mode handles routine tasks efficiently, while thinking mode provides substantial improvements on complex problems requiring multi-step reasoning.
Production Deployment on Together AI
The new DeepSeek hybrid model is available both through our DeepSeek-V3.1 serverless API and Dedicated Endpoints.
Together AI's optimizations ensure both thinking and non-thinking modes perform reliably under production workloads. Our NVIDIA GPU infrastructure is specifically tuned for large mixture-of-experts models like DeepSeek-V3.1, with transparent pricing that scales with your usage patterns.
Getting Started
DeepSeek-V3.1 on Together AI combines responsive interaction with deep reasoning and can be deployed immediately through Together AI's production APIs:
Use our Python SDK to quickly integrate DeepSeek-V3.1 into your applications:
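A minimal sketch using Together's OpenAI-compatible chat completions endpoint over plain HTTP (the official `together` SDK wraps the same API); the model slug `deepseek-ai/DeepSeek-V3.1` is an assumption — check the model catalog for the exact name.

```python
import json
import os
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "deepseek-ai/DeepSeek-V3.1"  # assumed slug; verify in the model catalog

def build_payload(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the JSON body for a chat completion request."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """Send one chat request; expects TOGETHER_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("TOGETHER_API_KEY"):
    print(chat("Summarize mixture-of-experts routing in two sentences."))
```

The network call only fires when an API key is present; the payload builder is separated out so request construction can be tested offline.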
Start building today:
- Interactive Playground — Test complex workflows before production
- API Documentation — Integration guides and examples
Try DeepSeek V3.1
Contact us to discuss enterprise deployments, custom integrations, or volume pricing for DeepSeek-V3.1.