NIM Llama 3.2 90B Vision Instruct API
NVIDIA NIM for GPU-accelerated Llama 3.2 90B Vision Instruct inference through OpenAI-compatible APIs.
Deploy this NIM model

To run this model, you first need to deploy it on a Dedicated Endpoint.
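Dedicated Endpoints are usually created from the Together dashboard, but deployment can also be scripted. The snippet below is a minimal, unofficial sketch that calls the REST API with the requests library; the /v1/endpoints path and the display_name, hardware, and autoscaling fields are assumptions here, so check the Dedicated Endpoints reference for the exact schema and available hardware types before using it.

# Hypothetical sketch: create a Dedicated Endpoint for this NIM model via the REST API.
# The request path and body fields are assumptions; verify them against the
# Dedicated Endpoints reference before relying on this.
import os
import requests

resp = requests.post(
    "https://api.together.xyz/v1/endpoints",
    headers={
        "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "nim/meta/llama-3.2-90b-vision-instruct",
        "display_name": "llama-3.2-90b-vision-nim",            # assumed field
        "hardware": "8x_nvidia_h100_80gb_sxm",                  # assumed hardware identifier
        "autoscaling": {"min_replicas": 1, "max_replicas": 1},  # assumed field
    },
)
resp.raise_for_status()
print(resp.json())  # the returned endpoint identifier is what you pass as `model` below

Once the endpoint is running, use its identifier in place of the "your-dedicated-endpoint-url" placeholder in the examples below.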
NIM Llama 3.2 90B Vision Instruct API Usage
Endpoint
nim/meta/llama-3.2-90b-vision-instruct
RUN INFERENCE (cURL)
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-dedicated-endpoint-url",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
  }'
RUN INFERENCE (Python)
from together import Together

client = Together()

response = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[
        {
            "role": "user",
            "content": "What are some fun things to do in New York?"
        }
    ]
)

print(response.choices[0].message.content)
RUN INFERENCE (TypeScript)
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  messages: [
    {
      role: "user",
      content: "What are some fun things to do in New York?"
    }
  ],
  model: "your-dedicated-endpoint-url"
});

console.log(response.choices[0].message.content);
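Because this is a vision model, requests can also include image input. The sketch below assumes the deployed endpoint accepts OpenAI-style multimodal content parts (a list mixing text and image_url items), which is the usual format for OpenAI-compatible vision chat APIs; the image URL is a placeholder.

# Sketch: sending an image alongside a text prompt, assuming the endpoint
# accepts OpenAI-style multimodal content parts. The image URL is a placeholder.
from together import Together

client = Together()

response = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/nyc-skyline.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)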