Ministral 3 3B Instruct 2512 API
Compact 3B multimodal model for cost-sensitive assistants, tools, and lightweight reasoning.

This model is not currently supported on Together AI.
Visit our Models page to view all the latest models.
Introducing Ministral 3 3B Instruct 2512
Ministral 3 3B Instruct is a compact 3B-class multimodal workhorse that combines a 3.4B language backbone with a 0.4B vision encoder. It preserves the 256K token context window and instruction-following behavior of larger Ministral 3 variants while targeting low-latency, cost-sensitive workloads. Ideal for routing, extraction, simple assistants, and high-volume automation pipelines where throughput and price matter more than frontier-level reasoning.
Ministral 3 3B Instruct 2512 API Usage
Endpoint
curl -X POST "https://api.together.xyz/v1/chat/completions" \
-H "Authorization: Bearer $TOGETHER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/Ministral-3-3B-Instruct-2512",
"messages": [
{
"role": "user",
"content": "What are some fun things to do in New York?"
}
]
}'
curl -X POST "https://api.together.xyz/v1/images/generations" \
-H "Authorization: Bearer $TOGETHER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/Ministral-3-3B-Instruct-2512",
"prompt": "Draw an anime style version of this image.",
"width": 1024,
"height": 768,
"steps": 28,
"n": 1,
"response_format": "url",
"image_url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"
}'
Vision (cURL)
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "mistralai/Ministral-3-3B-Instruct-2512",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe what you see in this image."},
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
      ]
    }],
    "max_tokens": 512
  }'
Code (cURL)
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "mistralai/Ministral-3-3B-Instruct-2512",
    "messages": [{
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  }'
Rerank (cURL)
curl -X POST https://api.together.xyz/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "mistralai/Ministral-3-3B-Instruct-2512",
    "query": "What animals can I find near Peru?",
    "documents": [
      "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
      "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
      "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
      "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
    ],
    "top_n": 2
  }'
Embeddings (cURL)
curl -X POST https://api.together.xyz/v1/embeddings \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
    "model": "mistralai/Ministral-3-3B-Instruct-2512"
  }'
Safety model (cURL)
curl -X POST https://api.together.xyz/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "prompt": "A horse is a horse",
    "max_tokens": 32,
    "temperature": 0.1,
    "safety_model": "mistralai/Ministral-3-3B-Instruct-2512"
  }'
Text-to-speech (cURL)
curl --location 'https://api.together.ai/v1/audio/generations' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --output speech.mp3 \
  --data '{
    "input": "Today is a wonderful day to build something people love!",
    "voice": "helpful woman",
    "response_format": "mp3",
    "sample_rate": 44100,
    "stream": false,
    "model": "mistralai/Ministral-3-3B-Instruct-2512"
  }'
curl -X POST "https://api.together.xyz/v1/audio/transcriptions" \
-H "Authorization: Bearer $TOGETHER_API_KEY" \
-F "model=mistralai/Ministral-3-3B-Instruct-2512" \
-F "language=en" \
-F "response_format=json" \
-F "timestamp_granularities=segment"
Video generation (cURL)
curl --request POST \
  --url https://api.together.xyz/v2/videos \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "mistralai/Ministral-3-3B-Instruct-2512",
    "prompt": "some penguins building a snowman"
  }'
Video from frame images (cURL)
curl --request POST \
  --url https://api.together.xyz/v2/videos \
  --header "Authorization: Bearer $TOGETHER_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "mistralai/Ministral-3-3B-Instruct-2512",
    "frame_images": [{"input_image": "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg"}]
  }'
Chat completions (Python)
from together import Together

client = Together()
response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[
        {
            "role": "user",
            "content": "What are some fun things to do in New York?"
        }
    ]
)
print(response.choices[0].message.content)
Image generation (Python)
from together import Together

client = Together()
image_completion = client.images.generate(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    width=1024,
    height=768,
    steps=28,
    prompt="Draw an anime style version of this image.",
    image_url="https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
)
print(image_completion.data[0].url)
Vision (Python)
from together import Together

client = Together()
response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what you see in this image."},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}}
        ]
    }]
)
print(response.choices[0].message.content)
Code (Python)
from together import Together

client = Together()
response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[
        {
            "role": "user",
            "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
        }
    ],
)
print(response.choices[0].message.content)
Rerank (Python)
from together import Together

client = Together()
query = "What animals can I find near Peru?"
documents = [
    "The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
    "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
    "The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
    "The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations.",
]
response = client.rerank.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    query=query,
    documents=documents,
    top_n=2
)
for result in response.results:
    print(f"Relevance Score: {result.relevance_score}")
Embeddings (Python)
from together import Together

client = Together()
response = client.embeddings.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    input="Our solar system orbits the Milky Way galaxy at about 515,000 mph"
)
print(response.data[0].embedding)
Safety model (Python)
from together import Together

client = Together()
response = client.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    prompt="A horse is a horse",
    max_tokens=32,
    temperature=0.1,
    safety_model="mistralai/Ministral-3-3B-Instruct-2512",
)
print(response.choices[0].text)
Text-to-speech (Python)
from together import Together

client = Together()
speech_file_path = "speech.mp3"
response = client.audio.speech.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    input="Today is a wonderful day to build something people love!",
    voice="helpful woman",
)
response.stream_to_file(speech_file_path)
Transcription (Python)
from together import Together

client = Together()
response = client.audio.transcriptions.create(
    file="audio.mp3",  # path to a local audio file (placeholder)
    model="mistralai/Ministral-3-3B-Instruct-2512",
    language="en",
    response_format="json",
    timestamp_granularities="segment"
)
print(response.text)
Video generation (Python)
from together import Together

client = Together()
# Create a video generation job
job = client.videos.create(
    prompt="A serene sunset over the ocean with gentle waves",
    model="mistralai/Ministral-3-3B-Instruct-2512"
)
Video from frame images (Python)
from together import Together

client = Together()
job = client.videos.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    frame_images=[
        {
            "input_image": "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg",
        }
    ]
)
Chat completions (TypeScript)
import Together from 'together-ai';

const together = new Together();
const completion = await together.chat.completions.create({
  model: 'mistralai/Ministral-3-3B-Instruct-2512',
  messages: [
    {
      role: 'user',
      content: 'What are some fun things to do in New York?'
    }
  ],
});
console.log(completion.choices[0].message.content);
import Together from "together-ai";
const together = new Together();
async function main() {
const response = await together.images.create({
model: "mistralai/Ministral-3-3B-Instruct-2512",
width: 1024,
height: 1024,
steps: 28,
prompt: "Draw an anime style version of this image.",
image_url: "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png",
});
console.log(response.data[0].url);
}
main();
import Together from "together-ai";
const together = new Together();
const imageUrl = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png";
async function main() {
const response = await together.chat.completions.create({
model: "mistralai/Ministral-3-3B-Instruct-2512",
messages: [{
role: "user",
content: [
{ type: "text", text: "Describe what you see in this image." },
{ type: "image_url", image_url: { url: imageUrl } }
]
}]
});
console.log(response.choices[0]?.message?.content);
}
main();
import Together from "together-ai";
const together = new Together();
async function main() {
const response = await together.chat.completions.create({
model: "mistralai/Ministral-3-3B-Instruct-2512",
messages: [{
role: "user",
content: "Given two binary strings `a` and `b`, return their sum as a binary string"
}]
});
console.log(response.choices[0]?.message?.content);
}
main();
import Together from "together-ai";
const together = new Together();
const query = "What animals can I find near Peru?";
const documents = [
"The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China.",
"The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
"The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia.",
"The guanaco is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations."
];
async function main() {
const response = await together.rerank.create({
model: "mistralai/Ministral-3-3B-Instruct-2512",
query: query,
documents: documents,
top_n: 2
});
for (const result of response.results) {
console.log(`Relevance Score: ${result.relevance_score}`);
}
}
main();
import Together from "together-ai";
const together = new Together();
const response = await client.embeddings.create({
model: 'mistralai/Ministral-3-3B-Instruct-2512',
input: 'Our solar system orbits the Milky Way galaxy at about 515,000 mph',
});
import Together from "together-ai";
const together = new Together();
async function main() {
const response = await together.completions.create({
model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
prompt: "A horse is a horse",
max_tokens: 32,
temperature: 0.1,
safety_model: "mistralai/Ministral-3-3B-Instruct-2512"
});
console.log(response.choices[0]?.text);
}
main();
Text-to-speech (TypeScript)
import Together from 'together-ai';
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';

const together = new Together();

async function generateAudio() {
  const res = await together.audio.create({
    input: 'Today is a wonderful day to build something people love!',
    voice: 'helpful woman',
    response_format: 'mp3',
    sample_rate: 44100,
    stream: false,
    model: 'mistralai/Ministral-3-3B-Instruct-2512',
  });
  if (res.body) {
    // Convert the response stream to a Node stream and write it to disk.
    const nodeStream = Readable.from(res.body as ReadableStream);
    const fileStream = createWriteStream('./speech.mp3');
    nodeStream.pipe(fileStream);
  }
}

generateAudio();
import Together from "together-ai";
const together = new Together();
const response = await together.audio.transcriptions.create(
model: "mistralai/Ministral-3-3B-Instruct-2512",
language: "en",
response_format: "json",
timestamp_granularities: "segment"
});
console.log(response)
import Together from "together-ai";
const together = new Together();
async function main() {
// Create a video generation job
const job = await together.videos.create({
prompt: "A serene sunset over the ocean with gentle waves",
model: "mistralai/Ministral-3-3B-Instruct-2512"
});
import Together from "together-ai";
const together = new Together();
const job = await together.videos.create({
model: "mistralai/Ministral-3-3B-Instruct-2512",
frame_images: [
{
input_image: "https://cdn.pixabay.com/photo/2020/05/20/08/27/cat-5195431_1280.jpg",
}
]
});
How to use Ministral 3 3B Instruct 2512
Model details
Architecture overview:
• 3.4B parameter language backbone paired with a 0.4B vision encoder for unified multimodal IO.
• 256K token context window aligned with the rest of the Ministral 3 lineup for consistent long-context behavior.
• Instruction-tuned objective tailored for concise, schema-following outputs suitable for automation and routing (see the sketch below).
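The schema-following emphasis pairs well with JSON-style outputs. A minimal extraction sketch in Python, assuming OpenAI-compatible JSON mode (response_format={"type": "json_object"}) is available for this model on Together, which varies by model and is an assumption here; the log line and field names are illustrative:
import json
from together import Together

client = Together()

# Illustrative log line to normalize into a structured record.
log_line = "2025-01-14 09:22:11 ERROR payments-api timeout after 30s (order 8812)"

response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[
        {
            "role": "system",
            "content": "Extract fields from the log line. Reply with JSON only, "
                       "using keys: timestamp, level, service, message.",
        },
        {"role": "user", "content": log_line},
    ],
    response_format={"type": "json_object"},  # assumption: JSON mode is supported here
    temperature=0,
)

record = json.loads(response.choices[0].message.content)
print(record["service"], record["level"])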
Training and performance:
• Trained on multilingual and code-heavy corpora to keep quality competitive despite the small parameter budget.
• Emphasis on robustness and stability for narrow, repetitive tasks rather than open-ended frontier reasoning.
• Strong cost-per-token characteristics, making it attractive for high-QPS backends and batch workloads (see the sketch below).
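For those high-QPS and batch workloads, the SDK's async client lets you fan out many small requests concurrently. A throughput sketch using the AsyncTogether client from the Python SDK; the concurrency cap, tickets, and label set are illustrative:
import asyncio
from together import AsyncTogether

client = AsyncTogether()
semaphore = asyncio.Semaphore(8)  # illustrative concurrency cap

async def classify(ticket: str) -> str:
    # Each request is tiny, so many can run in flight at once.
    async with semaphore:
        response = await client.chat.completions.create(
            model="mistralai/Ministral-3-3B-Instruct-2512",
            messages=[
                {"role": "system", "content": "Label the ticket as billing, bug, or other. One word only."},
                {"role": "user", "content": ticket},
            ],
            max_tokens=4,
            temperature=0,
        )
        return response.choices[0].message.content.strip()

async def main():
    tickets = ["I was charged twice", "App crashes on login", "How do I export data?"]
    labels = await asyncio.gather(*(classify(t) for t in tickets))
    print(dict(zip(tickets, labels)))

asyncio.run(main())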
Prompting Ministral 3 3B Instruct 2512
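No official prompt template is published on this page, but small instruct models generally respond best to a short system prompt that pins the output format, a low temperature, and a tight token budget. A routing-style sketch; the queue names and wording are illustrative, not official guidance:
from together import Together

client = Together()

response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[
        {
            "role": "system",
            "content": "Route the message to exactly one queue: BILLING, SUPPORT, or SALES. "
                       "Reply with the queue name only.",
        },
        {"role": "user", "content": "Hi, my last invoice seems to double-count seats."},
    ],
    temperature=0,  # deterministic routing
    max_tokens=4,   # a single label needs very few tokens
)
print(response.choices[0].message.content)  # expected: BILLING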
Applications & Use Cases
Automation and decisioning:
• High-volume classification, tagging, routing, and triage of tickets, events, or user messages.
• Information extraction from short documents, forms, and logs into structured records.
• Policy enforcement or guardrail-style checks that filter, normalize, or annotate content (see the sketch after this list).
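For guardrail-style checks, the same chat endpoint can act as a cheap pre-filter in front of a larger model. A sketch; the ALLOW/BLOCK protocol is an illustrative convention, not a built-in feature:
from together import Together

client = Together()

def passes_policy(text: str) -> bool:
    """Return True if the model judges the text appropriate (illustrative policy)."""
    response = client.chat.completions.create(
        model="mistralai/Ministral-3-3B-Instruct-2512",
        messages=[
            {
                "role": "system",
                "content": "You are a content filter. Reply ALLOW if the message is "
                           "appropriate for a general audience, otherwise reply BLOCK.",
            },
            {"role": "user", "content": text},
        ],
        temperature=0,
        max_tokens=2,
    )
    return response.choices[0].message.content.strip().upper().startswith("ALLOW")

print(passes_policy("How do I reset my password?"))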
User-facing assistants and utilities:
• Lightweight chatbots embedded in products or workflows where fast responses matter more than deep reasoning.
• Multimodal utilities that inspect screenshots or small images for quick explanation, labeling, or checks.
• Localized, task-specific helpers (FAQ bots, small copilots, inline explainers) that can run at very low cost (see the streaming sketch below).
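When responses are user-facing, streaming tokens as they arrive usually improves perceived latency more than raw model speed. A minimal streaming sketch with the Python SDK:
from together import Together

client = Together()

stream = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[{"role": "user", "content": "Give me three quick facts about llamas."}],
    stream=True,  # yield incremental chunks instead of one final payload
)

for chunk in stream:
    # Each chunk carries a small delta of the assistant's reply.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
print()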
Model Provider: Mistral AI
Type: Chat
Parameters: 3.8B
Deployment: Serverless, On-Demand Dedicated, Monthly Reserved
Quantization: FP8
Context length: 256K
Pricing: Check pricing
On-Demand Dedicated
Looking for production scale? Deploy on a dedicated endpoint
Deploy Ministral 3 3B Instruct 2512 on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.
