
DeepSeek-V3.2-Exp API

Experimental sparse attention model for efficient long-context processing


This model is not currently supported on Together AI.

Visit our Models page to view all the latest models.

DeepSeek-V3.2-Exp is an experimental model that introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to dramatically improve training and inference efficiency in long-context scenarios. Built on V3.1-Terminus, this model achieves substantial computational efficiency gains while maintaining virtually identical output quality and performance across diverse benchmarks including reasoning, coding, mathematics, and agentic tasks.

DeepSeek-V3.2-Exp API Usage

Endpoint

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
}'
curl -X POST "https://api.together.xyz/v1/images/generations" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp",
    "prompt": "Draw an anime style version of this image.",
    "width": 1024,
    "height": 768,
    "steps": 28,
    "n": 1,
    "response_format": "url",
    "image_url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"
  }'
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe what you see in this image."},
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
      ]
    }],
    "max_tokens": 512
  }'
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp",
    "messages": [{
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  }'
curl -X POST https://api.together.xyz/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp",
    "query": "What animals can I find near Peru?",
    "documents": [
      "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
      "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
      "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
      "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
    ],
    "top_n": 2
  }'
curl -X POST https://api.together.xyz/v1/embeddings \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp"
  }'
curl -X POST https://api.together.xyz/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "prompt": "A horse is a horse",
    "max_tokens": 32,
    "temperature": 0.1,
    "safety_model": "DeepSeek-AI/DeepSeek-V3-2-Exp"
  }'
curl --location 'https://api.together.ai/v1/audio/generations' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --output speech.mp3 \
  --data '{
    "input": "Today is a wonderful day to build something people love!",
    "voice": "helpful woman",
    "response_format": "mp3",
    "sample_rate": 44100,
    "stream": false,
    "model": "DeepSeek-AI/DeepSeek-V3-2-Exp"
  }'
curl -X POST "https://api.together.xyz/v1/audio/transcriptions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -F "file=@audio.mp3" \
  -F "model=DeepSeek-AI/DeepSeek-V3-2-Exp" \
  -F "language=en" \
  -F "response_format=json" \
  -F "timestamp_granularities=segment"
from together import Together

client = Together()

response = client.chat.completions.create(
  model="DeepSeek-AI/DeepSeek-V3-2-Exp",
  messages=[
    {
      "role": "user",
      "content": "What are some fun things to do in New York?"
    }
  ]
)
print(response.choices[0].message.content)
from together import Together

client = Together()

image_completion = client.images.generate(
    model="DeepSeek-AI/DeepSeek-V3-2-Exp",
    width=1024,
    height=768,
    steps=28,
    prompt="Draw an anime style version of this image.",
    image_url="https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
)

print(image_completion.data[0].url)


from together import Together

client = Together()

response = client.chat.completions.create(
    model="DeepSeek-AI/DeepSeek-V3-2-Exp",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what you see in this image."},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
        ]
    }]
)
print(response.choices[0].message.content)

from together import Together

client = Together()
response = client.chat.completions.create(
  model="DeepSeek-AI/DeepSeek-V3-2-Exp",
  messages=[
    {
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }
  ],
)

print(response.choices[0].message.content)

from together import Together

client = Together()

query = "What animals can I find near Peru?"

documents = [
  "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
  "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
  "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
  "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations.",
]

response = client.rerank.create(
  model="DeepSeek-AI/DeepSeek-V3-2-Exp",
  query=query,
  documents=documents,
  top_n=2
)

for result in response.results:
    print(f"Relevance Score: {result.relevance_score}")

from together import Together

client = Together()

response = client.embeddings.create(
  model="DeepSeek-AI/DeepSeek-V3-2-Exp",
  input="Our solar system orbits the Milky Way galaxy at about 515,000 mph"
)

print(response.data[0].embedding[:5])

from together import Together

client = Together()

response = client.completions.create(
  model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
  prompt="A horse is a horse",
  max_tokens=32,
  temperature=0.1,
  safety_model="DeepSeek-AI/DeepSeek-V3-2-Exp",
)

print(response.choices[0].text)

from together import Together

client = Together()

speech_file_path = "speech.mp3"

response = client.audio.speech.create(
  model="DeepSeek-AI/DeepSeek-V3-2-Exp",
  input="Today is a wonderful day to build something people love!",
  voice="helpful woman",
)
    
response.stream_to_file(speech_file_path)

from together import Together

client = Together()
response = client.audio.transcriptions.create(
    file="audio.mp3",  # path to a local audio file
    model="DeepSeek-AI/DeepSeek-V3-2-Exp",
    language="en",
    response_format="json",
    timestamp_granularities="segment"
)
print(response.text)
import Together from 'together-ai';
const together = new Together();

const completion = await together.chat.completions.create({
  model: 'DeepSeek-AI/DeepSeek-V3-2-Exp',
  messages: [
    {
      role: 'user',
      content: 'What are some fun things to do in New York?'
     }
  ],
});

console.log(completion.choices[0].message.content);
import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.images.create({
    model: "DeepSeek-AI/DeepSeek-V3-2-Exp",
    width: 1024,
    height: 1024,
    steps: 28,
    prompt: "Draw an anime style version of this image.",
    image_url: "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
  });

  console.log(response.data[0].url);
}

main();

import Together from "together-ai";

const together = new Together();
const imageUrl = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png";

async function main() {
  const response = await together.chat.completions.create({
    model: "DeepSeek-AI/DeepSeek-V3-2-Exp",
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Describe what you see in this image." },
        { type: "image_url", image_url: { url: imageUrl } }
      ]
    }]
  });
  
  console.log(response.choices[0]?.message?.content);
}

main();

import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.chat.completions.create({
    model: "DeepSeek-AI/DeepSeek-V3-2-Exp",
    messages: [{
      role: "user",
      content: "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  });
  
  console.log(response.choices[0]?.message?.content);
}

main();

import Together from "together-ai";

const together = new Together();

const query = "What animals can I find near Peru?";
const documents = [
  "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
  "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
  "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
  "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
];

async function main() {
  const response = await together.rerank.create({
    model: "DeepSeek-AI/DeepSeek-V3-2-Exp",
    query: query,
    documents: documents,
    top_n: 2
  });
  
  for (const result of response.results) {
    console.log(`Relevance Score: ${result.relevance_score}`);
  }
}

main();


import Together from "together-ai";

const together = new Together();

const response = await together.embeddings.create({
  model: 'DeepSeek-AI/DeepSeek-V3-2-Exp',
  input: 'Our solar system orbits the Milky Way galaxy at about 515,000 mph',
});

import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.completions.create({
    model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    prompt: "A horse is a horse",
    max_tokens: 32,
    temperature: 0.1,
    safety_model: "DeepSeek-AI/DeepSeek-V3-2-Exp"
  });
  
  console.log(response.choices[0]?.text);
}

main();

import Together from 'together-ai';
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';

const together = new Together();

async function generateAudio() {
  const res = await together.audio.create({
    input: 'Today is a wonderful day to build something people love!',
    voice: 'helpful woman',
    response_format: 'mp3',
    sample_rate: 44100,
    stream: false,
    model: 'DeepSeek-AI/DeepSeek-V3-2-Exp',
  });

  if (res.body) {
    console.log(res.body);
    const nodeStream = Readable.from(res.body as ReadableStream);
    const fileStream = createWriteStream('./speech.mp3');

    nodeStream.pipe(fileStream);
  }
}

generateAudio();

import Together from "together-ai";

const together = new Together();

const response = await together.audio.transcriptions.create({
  file: "audio.mp3", // path to a local audio file
  model: "DeepSeek-AI/DeepSeek-V3-2-Exp",
  language: "en",
  response_format: "json",
  timestamp_granularities: "segment"
});
console.log(response);

How to use DeepSeek-V3.2-Exp

Model details

Architecture Overview:
• 685B total parameters with Mixture-of-Experts (MoE) architecture
• Multi-head Latent Attention (MLA) with MQA mode for efficient key-value sharing
• 128K token context window with extended long-context capabilities
• DeepSeek Sparse Attention (DSA) featuring a lightning indexer and fine-grained token selection
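
The DSA bullet above can be pictured with a toy top-k sparse attention pass. The sketch below is illustrative only, not DeepSeek's DSA kernels: the real lightning indexer is a learned lightweight scorer, while here a plain dot-product score stands in for it, and all shapes are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, k, v, top_k):
    """For each query, attend only to the top_k highest-scoring keys."""
    L, d = q.shape
    # Toy scorer: full score matrix. Real DSA uses a cheap indexer so the
    # main attention never materializes all L*L scores.
    scores = q @ k.T / np.sqrt(d)                                # (L, L)
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]   # (L, top_k) selected keys
    sel = np.take_along_axis(scores, idx, axis=-1)               # (L, top_k) their scores
    w = softmax(sel, axis=-1)                                    # sparse attention weights
    # Mix only the selected value vectors.
    return np.einsum("tk,tkd->td", w, v[idx])

rng = np.random.default_rng(0)
L, d = 16, 8
q, k, v = (rng.standard_normal((L, d)) for _ in range(3))
out = sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (16, 8)
```

With `top_k` equal to the sequence length the function reduces to ordinary dense attention, which makes a handy sanity check.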

Training Methodology:
• Continued pre-training from DeepSeek-V3.1-Terminus base checkpoint
• Two-stage training: dense warm-up (2.1B tokens) followed by sparse training (943.7B tokens)
• Lightning indexer trained with KL-divergence alignment to main attention distribution
• Post-training includes specialist distillation across mathematics, coding, reasoning, and agentic domains
• Group Relative Policy Optimization (GRPO) for reinforcement learning alignment
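
The KL-divergence alignment step above can be sketched as minimizing the divergence between the distribution the full attention produces (the target) and the one implied by the indexer's scores. This is a hedged illustration; the function name and shapes are invented for clarity and this is not DeepSeek's training code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kl_alignment_loss(indexer_scores, attn_scores):
    """KL(p_attn || p_indexer): pushes the cheap indexer's distribution
    toward the one produced by the full (expensive) attention."""
    p = softmax(attn_scores)      # target: main attention distribution
    q = softmax(indexer_scores)   # prediction: indexer score distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(0)
attn = rng.standard_normal(32)       # toy attention logits for one query
indexer = rng.standard_normal(32)    # toy indexer logits for the same query
loss = kl_alignment_loss(indexer, attn)
print(loss > 0)  # True: the two random distributions disagree
```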

Performance Benchmarks:
DeepSeek-V3.2-Exp demonstrates performance on par with V3.1-Terminus across comprehensive evaluations:

| Benchmark | DeepSeek-V3.1-Terminus | DeepSeek-V3.2-Exp |
|---|---|---|
| Reasoning Mode (General) | | |
| MMLU-Pro | 85.0 | 85.0 |
| GPQA-Diamond | 80.7 | 79.9 |
| Humanity's Last Exam | 21.7 | 19.8 |
| Code | | |
| LiveCodeBench | 74.9 | 74.1 |
| Codeforces-Div1 | 2046 | 2121 |
| Aider-Polyglot | 76.1 | 74.5 |
| Math | | |
| AIME 2025 | 88.4 | 89.3 |
| HMMT 2025 | 86.1 | 83.6 |
| Agentic Tool Use | | |
| BrowseComp | 38.5 | 40.1 |
| BrowseComp-zh | 45.0 | 47.9 |
| SimpleQA | 96.8 | 97.1 |
| SWE Verified | 68.4 | 67.8 |
| SWE-bench Multilingual | 57.8 | 57.9 |
| Terminal-bench | 36.7 | 37.7 |

Efficiency Characteristics:
• Reduces core attention complexity from O(L²) to O(Lk) where k≪L
• Up to 70% cost reduction for long-context inference at 128K tokens
• Selects 2048 key-value tokens per query token during sparse attention
• Optimized for H800, H200, MI350, and NPU deployments with specialized kernels
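
The complexity claim in the first bullet can be checked with back-of-envelope arithmetic at the quoted numbers. This ignores the indexer's own overhead and kernel-level details; it only counts query-key interactions.

```python
L = 128 * 1024   # 128K-token context
k = 2048         # key-value tokens selected per query under DSA

dense_pairs = L * L    # O(L^2) core attention
sparse_pairs = L * k   # O(L*k) sparse attention

# Each query touches k/L = 2048/131072 = 1/64 of the keys.
print(sparse_pairs / dense_pairs)  # 0.015625
```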

Prompting DeepSeek-V3.2-Exp

Applications & Use Cases

Long-Context Processing:
• Extended document analysis and summarization up to 128K tokens
• Multi-document question answering and information synthesis
• Legal document review and contract analysis
• Research paper analysis and literature review automation
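
For long-document workloads like the ones above, it helps to budget tokens before sending a request. A rough sketch using the common ~4-characters-per-token heuristic; this is an approximation (the model's actual tokenizer will differ), and the output reserve chosen here is arbitrary.

```python
def fits_context(text: str, context_tokens: int = 128_000,
                 reserve_for_output: int = 4_000) -> bool:
    """Rough check that a document plus a reply budget fits the 128K window."""
    est_tokens = len(text) // 4          # ~4 characters per token heuristic
    return est_tokens + reserve_for_output <= context_tokens

doc = "lorem ipsum " * 50_000            # ~600K characters, ~150K estimated tokens
print(fits_context(doc))                 # False: too large, needs chunking
```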

Code & Development:
• Software engineering tasks with large codebase context (SWE-bench: 67.8%)
• Multi-file code generation and refactoring (Aider-Polyglot: 74.5%)
• Competitive programming with advanced algorithms (Codeforces: 2121 rating)
• Terminal and command-line task automation

Reasoning & Mathematics:
• Advanced mathematical problem solving (AIME 2025: 89.3%, HMMT 2025: 83.6%)
• Multi-step logical reasoning and proof generation
• Scientific research assistance and hypothesis generation
• STEM education and tutoring applications

Agentic Applications:
• Web search and browsing agents (BrowseComp: 40.1%)
• Automated information gathering and fact-checking (SimpleQA: 97.1%)
• Task automation and workflow orchestration
• Multi-step planning and execution with tool use

Looking for production scale? Deploy on a dedicated endpoint

Deploy DeepSeek-V3.2-Exp on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started