
DeepSeek R1 Distilled Llama 70B API

Llama 70B distilled with reasoning capabilities from DeepSeek R1. It surpasses GPT-4o with 94.5% on MATH-500 and matches o1-mini on coding benchmarks.

Try our DeepSeek R1 Distilled Llama 70B API

DeepSeek R1 Distilled Llama 70B API Usage

Endpoint

RUN INFERENCE (cURL)

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "messages": [
      {"role": "user", "content": "What is 1 + 1?"}
    ],
    "stream": true
  }'

RUN INFERENCE (Python)

from together import Together

client = Together()

# Stream a chat completion from the DeepSeek R1 distilled model
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "What is 1 + 1?"}],  # example prompt
    stream=True,
)
for token in response:
    if hasattr(token, "choices") and token.choices:
        # delta.content can be None on some chunks (e.g., the final one)
        print(token.choices[0].delta.content or "", end="", flush=True)

RUN INFERENCE (TypeScript)

import Together from "together-ai";

const together = new Together();

// Stream a chat completion from the DeepSeek R1 distilled model
const response = await together.chat.completions.create({
  messages: [{ role: "user", content: "What is 1 + 1?" }], // example prompt
  model: "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
  stream: true,
});

for await (const token of response) {
  // Write each streamed token as it arrives; content may be undefined on some chunks
  process.stdout.write(token.choices[0]?.delta?.content ?? "");
}
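
Like DeepSeek R1 itself, the distilled model typically wraps its chain of thought in <think>...</think> tags before giving the final answer. Below is a minimal Python sketch of separating the reasoning from the answer, assuming a non-streaming call and that tag format; the prompt and the splitting logic are illustrative, not part of the SDK.

from together import Together

client = Together()

# Non-streaming request so the full text (reasoning + answer) arrives at once
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "What is the derivative of x^2?"}],  # example prompt
)

text = response.choices[0].message.content

# Assumption: the reasoning is enclosed in <think>...</think> ahead of the answer;
# split on the closing tag to separate the two.
if "</think>" in text:
    reasoning, answer = text.split("</think>", 1)
    reasoning = reasoning.replace("<think>", "").strip()
    answer = answer.strip()
else:
    reasoning, answer = "", text.strip()

print("Reasoning:", reasoning)
print("Answer:", answer)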

How to use DeepSeek R1 Distilled Llama 70B

Model details

Prompting DeepSeek R1 Distilled Llama 70B

Applications & Use Cases

Looking for production scale? Deploy on a dedicated endpoint

Deploy DeepSeek R1 Distilled Llama 70B on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started