NIM Llama 3.3 Nemotron Super 49B v1 API

NVIDIA NIM for a high-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.

Deploy this NIM model

To run this model, you first need to deploy it on a Dedicated Endpoint.

NIM Llama 3.3 Nemotron Super 49B v1 API Usage

RUN INFERENCE (cURL)

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-dedicated-endpoint-url",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
  }'

RUN INFERENCE (Python)

from together import Together

client = Together()

response = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
)
print(response.choices[0].message.content)

RUN INFERENCE (TypeScript)

import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  messages: [
    {
      role: "user",
      content: "What are some fun things to do in New York?"
    }
  ],
  model: "your-dedicated-endpoint-url"
});

console.log(response.choices[0].message.content);
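Nemotron models can toggle their long-form reasoning through a system prompt ("detailed thinking on" / "detailed thinking off"); this control string is an NVIDIA convention, so verify it against the model card before relying on it. Below is a minimal Python sketch that assembles the request arguments, including token streaming for interactive use. `build_request` is a hypothetical helper for illustration, not part of the Together SDK:

```python
# Sketch: assemble chat-completion arguments for the Nemotron reasoning toggle.
# "detailed thinking on"/"off" is the NVIDIA system-prompt convention for
# Nemotron models -- confirm against the model card before relying on it.

def build_request(prompt: str, thinking: bool = True) -> dict:
    """Hypothetical helper: build kwargs for chat.completions.create."""
    mode = "on" if thinking else "off"
    return {
        "model": "your-dedicated-endpoint-url",  # your dedicated endpoint ID
        "messages": [
            {"role": "system", "content": f"detailed thinking {mode}"},
            {"role": "user", "content": prompt},
        ],
        "stream": True,  # yield tokens incrementally instead of one response
    }

# Usage (requires TOGETHER_API_KEY and a running dedicated endpoint):
# from together import Together
# client = Together()
# stream = client.chat.completions.create(
#     **build_request("What are some fun things to do in New York?")
# )
# for chunk in stream:
#     print(chunk.choices[0].delta.content or "", end="", flush=True)
```

Streaming keeps time-to-first-token low for chat UIs; drop `"stream": True` to receive the full completion in a single response as in the examples above.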

Looking for production scale? Deploy on a dedicated endpoint

Deploy NIM Llama 3.3 Nemotron Super 49B v1 on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started