
Apriel-1.6-15B-Thinker API

Frontier-level multimodal reasoning in a compact, token-efficient model


This model is not currently supported on Together AI.

Visit our Models page to view all the latest models.

Introducing Apriel-1.6-15B-Thinker

Apriel-1.6-15B-Thinker is an updated multimodal reasoning model from ServiceNow's Apriel SLM series that scores 57 on the AA Intelligence Index, matching models like Qwen-235B-A22B and DeepSeek-v3.2 Exp while being 15x smaller. With 30% better reasoning token efficiency than its predecessor, it delivers frontier-level performance on a single GPU.

• 57 on the AA Intelligence Index: matches Qwen-235B & DeepSeek-v3.2 Exp
• 88% on AIME 2025: elite mathematical reasoning
• 30% fewer reasoning tokens: better efficiency vs. v1.5

Key Capabilities
• Frontier Performance: 57 on AA Index, matching models 15x its size
• Token Efficiency: 30% fewer reasoning tokens than Apriel-1.5
• Enterprise Ready: 69% Tau2 Bench Telecom, 69% IFBench
• Single GPU: 15B parameters fit entirely on one GPU

Apriel-1.6-15B-Thinker API Usage

Endpoint

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
}'
curl -X POST "https://api.together.xyz/v1/images/generations" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "prompt": "Draw an anime style version of this image.",
    "width": 1024,
    "height": 768,
    "steps": 28,
    "n": 1,
    "response_format": "url",
    "image_url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"
  }'
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe what you see in this image."},
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
      ]
    }],
    "max_tokens": 512
  }'
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "messages": [{
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  }'
curl -X POST https://api.together.xyz/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "query": "What animals can I find near Peru?",
    "documents": [
      "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
      "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
      "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
      "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
    ],
    "top_n": 2
  }'
curl -X POST https://api.together.xyz/v1/embeddings \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker"
  }'
curl -X POST https://api.together.xyz/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "prompt": "A horse is a horse",
    "max_tokens": 32,
    "temperature": 0.1,
    "safety_model": "ServiceNow-AI/Apriel-1.6-15b-Thinker"
  }'
curl --location 'https://api.together.ai/v1/audio/generations' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --output speech.mp3 \
  --data '{
    "input": "Today is a wonderful day to build something people love!",
    "voice": "helpful woman",
    "response_format": "mp3",
    "sample_rate": 44100,
    "stream": false,
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker"
  }'
curl -X POST "https://api.together.xyz/v1/audio/transcriptions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -F "model=ServiceNow-AI/Apriel-1.6-15b-Thinker" \
  -F "language=en" \
  -F "response_format=json" \
  -F "timestamp_granularities=segment"
curl --request POST \
  --url https://api.together.xyz/v2/videos \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "prompt": "some penguins building a snowman"
  }'
curl --request POST \
  --url https://api.together.xyz/v2/videos \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "frame_images": [{"input_image": "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg"}]
  }'

from together import Together

client = Together()

response = client.chat.completions.create(
  model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
  messages=[
    {
      "role": "user",
      "content": "What are some fun things to do in New York?"
    }
  ]
)
print(response.choices[0].message.content)
from together import Together

client = Together()

image_completion = client.images.generate(
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    width=1024,
    height=768,
    steps=28,
    prompt="Draw an anime style version of this image.",
    image_url="https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
)

print(image_completion.data[0].url)


from together import Together

client = Together()

response = client.chat.completions.create(
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what you see in this image."},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
        ]
    }]
)
print(response.choices[0].message.content)

from together import Together

client = Together()
response = client.chat.completions.create(
  model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
  messages=[
    {
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }
  ],
)

print(response.choices[0].message.content)

from together import Together

client = Together()

query = "What animals can I find near Peru?"

documents = [
  "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
  "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
  "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
  "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations.",
]

response = client.rerank.create(
  model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
  query=query,
  documents=documents,
  top_n=2
)

for result in response.results:
    print(f"Relevance Score: {result.relevance_score}")

from together import Together

client = Together()

response = client.embeddings.create(
  model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
  input="Our solar system orbits the Milky Way galaxy at about 515,000 mph"
)

print(response.data[0].embedding)

from together import Together

client = Together()

response = client.completions.create(
  model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
  prompt="A horse is a horse",
  max_tokens=32,
  temperature=0.1,
  safety_model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
)

print(response.choices[0].text)

from together import Together

client = Together()

speech_file_path = "speech.mp3"

response = client.audio.speech.create(
  model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
  input="Today is a wonderful day to build something people love!",
  voice="helpful woman",
)
    
response.stream_to_file(speech_file_path)

from together import Together

client = Together()
response = client.audio.transcribe(
    # "audio.mp3" is a placeholder path to a local audio file
    file="audio.mp3",
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    language="en",
    response_format="json",
    timestamp_granularities="segment"
)
print(response.text)
from together import Together

client = Together()

# Create a video generation job
job = client.videos.create(
    prompt="A serene sunset over the ocean with gentle waves",
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker"
)
from together import Together

client = Together()

job = client.videos.create(
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    frame_images=[
        {
            "input_image": "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg",
        }
    ]
)
import Together from 'together-ai';
const together = new Together();

const completion = await together.chat.completions.create({
  model: 'ServiceNow-AI/Apriel-1.6-15b-Thinker',
  messages: [
    {
      role: 'user',
      content: 'What are some fun things to do in New York?'
     }
  ],
});

console.log(completion.choices[0].message.content);
import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.images.create({
    model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    width: 1024,
    height: 1024,
    steps: 28,
    prompt: "Draw an anime style version of this image.",
    image_url: "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
  });

  console.log(response.data[0].url);
}

main();

import Together from "together-ai";

const together = new Together();
const imageUrl = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png";

async function main() {
  const response = await together.chat.completions.create({
    model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Describe what you see in this image." },
        { type: "image_url", image_url: { url: imageUrl } }
      ]
    }]
  });
  
  console.log(response.choices[0]?.message?.content);
}

main();

import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.chat.completions.create({
    model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    messages: [{
      role: "user",
      content: "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  });
  
  console.log(response.choices[0]?.message?.content);
}

main();

import Together from "together-ai";

const together = new Together();

const query = "What animals can I find near Peru?";
const documents = [
  "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
  "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
  "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
  "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
];

async function main() {
  const response = await together.rerank.create({
    model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    query: query,
    documents: documents,
    top_n: 2
  });
  
  for (const result of response.results) {
    console.log(`Relevance Score: ${result.relevance_score}`);
  }
}

main();


import Together from "together-ai";

const together = new Together();

const response = await together.embeddings.create({
  model: 'ServiceNow-AI/Apriel-1.6-15b-Thinker',
  input: 'Our solar system orbits the Milky Way galaxy at about 515,000 mph',
});

console.log(response.data[0].embedding);

import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.completions.create({
    model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    prompt: "A horse is a horse",
    max_tokens: 32,
    temperature: 0.1,
    safety_model: "ServiceNow-AI/Apriel-1.6-15b-Thinker"
  });
  
  console.log(response.choices[0]?.text);
}

main();

import Together from 'together-ai';
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';

const together = new Together();

async function generateAudio() {
   const res = await together.audio.create({
    input: 'Today is a wonderful day to build something people love!',
    voice: 'helpful woman',
    response_format: 'mp3',
    sample_rate: 44100,
    stream: false,
    model: 'ServiceNow-AI/Apriel-1.6-15b-Thinker',
  });

  if (res.body) {
    console.log(res.body);
    const nodeStream = Readable.from(res.body as ReadableStream);
    const fileStream = createWriteStream('./speech.mp3');

    nodeStream.pipe(fileStream);
  }
}

generateAudio();

import Together from "together-ai";

const together = new Together();

const response = await together.audio.transcriptions.create({
  // "audio.mp3" is a placeholder path to a local audio file
  file: "audio.mp3",
  model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
  language: "en",
  response_format: "json",
  timestamp_granularities: "segment"
});

console.log(response);
import Together from "together-ai";

const together = new Together();

async function main() {
  // Create a video generation job
  const job = await together.videos.create({
    prompt: "A serene sunset over the ocean with gentle waves",
    model: "ServiceNow-AI/Apriel-1.6-15b-Thinker"
  });
import Together from "together-ai";

const together = new Together();

const job = await together.videos.create({
  model: "ServiceNow-AI/Apriel-1.6-15b-Thinker",
  frame_images: [
    {
      input_image: "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg",
    }
  ]
});

How to use Apriel-1.6-15B-Thinker

Model details

Architecture Overview:
• 15B parameter multimodal model supporting image-text-to-text reasoning with 131K context window for complex tasks.
• Built on continual pre-training across billions of tokens covering math, code, science, logical reasoning, and multimodal image-text data.
• Simplified chat template for easier output parsing: the model emits its reasoning steps first, followed by a final-response delimiter (see the parsing sketch after this list).
• Fits entirely on a single GPU, making it highly memory-efficient for deployment.
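
The delimiter-based output format makes it straightforward to separate the reasoning trace from the final answer. Below is a minimal parsing sketch in Python, assuming the [BEGIN FINAL RESPONSE] / [END FINAL RESPONSE] markers used by earlier Apriel releases; check the model card for the exact delimiter shipped with this version.

from together import Together

# Minimal parsing sketch (assumption): Apriel emits its reasoning first, then the
# final answer between [BEGIN FINAL RESPONSE] and [END FINAL RESPONSE] markers,
# as in earlier Apriel releases. Verify the exact delimiters in the model card.
BEGIN = "[BEGIN FINAL RESPONSE]"
END = "[END FINAL RESPONSE]"

def split_reasoning(raw: str) -> tuple[str, str]:
    """Return (reasoning, final_answer) from a raw Apriel completion."""
    if BEGIN in raw:
        reasoning, _, rest = raw.partition(BEGIN)
        final = rest.split(END)[0] if END in rest else rest
        return reasoning.strip(), final.strip()
    # No delimiter found: treat the whole output as the answer.
    return "", raw.strip()

client = Together()
response = client.chat.completions.create(
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)

reasoning, answer = split_reasoning(response.choices[0].message.content)
print("Final answer:", answer)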

Training Methodology:
• Multi-stage training: continual pre-training, supervised fine-tuning (2.4M samples), and reinforcement learning optimization.
• Training data includes ~15% from NVIDIA Nemotron collection for depth up-scaling and diverse domain coverage.
• RL stage specifically optimizes reasoning efficiency by using fewer tokens, stopping earlier when confident, and giving direct answers on simple queries.
• Incremental lightweight multimodal SFT following text-based supervised fine-tuning phase.

Performance Characteristics:
• Elite reasoning: 88% AIME 2025, 73% GPQA Diamond, 81% LiveCodeBench, 79% MMLU Pro.
• Strong instruction following: 69% IFBench, 83.34% Multi IF, 57.2% Agent IF.
• Enterprise-ready: 69% Tau2 Bench Telecom, 66.67% Tau2 Bench Retail, 58% Tau2 Bench Airline.
• Advanced function calling: 63.5% BFCL v3, 33.2% ComplexFuncBench.
• Multimodal excellence: 72% MMMU validation, 60.28% MMMU-PRO, 79.9% MathVista, 86.04% AI2D Test.
• Reduces reasoning token usage by 30%+ compared to Apriel-1.5 while maintaining or improving task performance.

Prompting Apriel-1.6-15B-Thinker

Applications & Use Cases

Multimodal Reasoning:
• Visual question answering and complex image understanding tasks requiring deep reasoning.
• Mathematical problem solving from visual inputs including charts, diagrams, and equations.
• Document analysis combining text and visual elements for comprehensive understanding.

Code & Development:
• Code assistance and generation with logical reasoning and multi-step problem decomposition.
• Technical documentation understanding and creation with visual component support.
• Software development workflows requiring reasoning over code structure and logic.

Enterprise Applications:
• Telecom, retail, and airline domain-specific workflows with strong Tau2 Bench performance.
• Complex instruction following and function calling for business automation (see the tool-calling sketch after this list).
• Agent-based systems requiring reliable instruction adherence and multi-turn interactions.
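
For the function-calling workflows referenced above, the sketch below uses the OpenAI-compatible tools parameter of the chat completions endpoint. It is a hedged example: the get_ticket_status tool is hypothetical, and tool-calling support for this model on your serving stack should be confirmed before relying on it.

import json
from together import Together

client = Together()

# Hypothetical business tool, defined only for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_ticket_status",
        "description": "Look up the status of a support ticket by ID.",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="ServiceNow-AI/Apriel-1.6-15b-Thinker",
    messages=[{"role": "user", "content": "What's the status of ticket INC0012345?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print("Tool requested:", call.function.name)
    print("Arguments:", json.loads(call.function.arguments))
else:
    # The model answered directly instead of calling a tool.
    print(message.content)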

Knowledge & Question Answering:
• Information retrieval combining text and visual context for accurate responses.
• Scientific and technical question answering with reasoning transparency.
• Educational applications requiring step-by-step problem solving explanations.

Creative & General Purpose:
• Question answering across diverse domains with multimodal context.
• Logical reasoning tasks requiring systematic analysis and structured thinking.
• Real-world workflows where efficiency and single-GPU deployment are critical constraints.

Looking for production scale? Deploy on a dedicated endpoint

Deploy Apriel-1.6-15B-Thinker on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started