Interested in running DeepSeek-V3.1 in production?

Request access to Together Dedicated Endpoints—private and fast DeepSeek-V3.1 inference at scale.

  • Fastest inference: Our DeepSeek-V3.1 API runs over 10% faster than any other provider
  • Flexible scaling: Deploy via Together Serverless or dedicated endpoints
  • Hybrid modes: Switch between thinking and non-thinking modes
  • Secure & reliable: Private, compliant, and built for production

First Name*

Last Name*

Company Email*

Company Location*

What peak queries per second would you like to support?*

Are you interested in NVIDIA DGX Cloud?*


DeepSeek-V3.1 on Together AI

Unmatched performance. Cost-effective scaling. Secure infrastructure.

  • Fastest inference engine

    We run DeepSeek-V3.1 over 10% faster than any other provider. Non-thinking mode delivers instant responses on routine tasks, while thinking mode handles complex analysis and multi-step workflows with low latency.

  • Scalable infrastructure

    Whether you're just starting out or scaling to production workloads, choose from Together Serverless APIs for instant scaling or dedicated endpoints for guaranteed performance. Together AI handles the complexity of hybrid model deployment.

  • Security-first approach

    We host all models in our own data centers, with no data sharing back to DeepSeek. Developers retain full control over their data with opt-out privacy settings.

Seamlessly scale your V3.1 deployment

  • Together Serverless API

    Get started instantly with pay-per-token access to DeepSeek-V3.1 on Together's fastest inference stack, with no capacity planning or long-term commitments required.

    • Instant scalability and generous rate limits
    • Flexible, pay-per-token pricing with no long-term commitments
    • Full opt-out privacy controls

  • Together Dedicated Endpoints

    For production workloads that need guaranteed performance, dedicated endpoints give you private capacity on high-performance GPUs optimized for hybrid models, with predictable contract-based pricing.

    • Low latency from Together Inference stack
    • High-performance GPUs optimized for hybrid models
    • Contract-based pricing for predictable, cost-effective scaling

Powering the next generation of AI applications

Use our API to deploy DeepSeek-V3.1 on the fastest inference stack available with optimal cost efficiency. Servers are available in North America with complete data privacy controls.
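As a minimal sketch of what calling the model might look like, the snippet below sends a chat-completions request to Together's OpenAI-compatible endpoint using only the Python standard library. The model ID and the `reasoning_effort` field used to switch between thinking and non-thinking modes are assumptions for illustration; check Together's API documentation for the exact names.

```python
# Hedged sketch: querying DeepSeek-V3.1 via Together's OpenAI-compatible
# chat completions API. Model ID and mode toggle are assumed, not confirmed.
import json
import os
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"


def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload for DeepSeek-V3.1.

    `thinking` selects between the hybrid model's two modes; the
    `reasoning_effort` field name is hypothetical -- providers expose
    this switch in different ways.
    """
    return {
        "model": "deepseek-ai/DeepSeek-V3.1",  # assumed model ID
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": "high" if thinking else "low",  # hypothetical toggle
    }


def ask(prompt: str, thinking: bool = False) -> str:
    """Send the request and return the assistant's reply text."""
    payload = json.dumps(build_request(prompt, thinking)).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a `TOGETHER_API_KEY` set in the environment, `ask("Summarize this ticket", thinking=False)` would target fast routine responses, while `thinking=True` would request the deeper multi-step mode.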