Cogito V1 Preview Qwen 14B API
Best-in-class open-source LLM trained with IDA for alignment, reasoning, and self-reflective, agentic applications.

To run this model, you first need to deploy it on a Dedicated Endpoint.
Cogito V1 Preview Qwen 14B API Usage
Run inference (cURL)
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-dedicated-endpoint-url",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
  }'
Run inference (Python)
from together import Together

client = Together()

response = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[
        {
            "role": "user",
            "content": "What are some fun things to do in New York?"
        }
    ],
)
print(response.choices[0].message.content)
Run inference (TypeScript)
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  messages: [
    {
      role: "user",
      content: "What are some fun things to do in New York?",
    },
  ],
  model: "your-dedicated-endpoint-url",
});
console.log(response.choices[0].message.content);
How to use Cogito V1 Preview Qwen 14B
Model details
The Cogito LLMs are instruction-tuned generative models (text in, text out). All models are released under an open license for commercial use.
- Cogito models are hybrid reasoning models: each model can either answer directly (like a standard LLM) or self-reflect before answering (like a reasoning model). Both modes are illustrated in the first sketch after this list.
- The LLMs are trained using Iterated Distillation and Amplification (IDA), a scalable and efficient alignment strategy for superintelligence based on iterative self-improvement.
- The models have been optimized for coding, STEM, instruction following, and general helpfulness, and have significantly stronger multilingual, coding, and tool-calling capabilities than their size-equivalent counterparts (see the second sketch after this list).
- In both standard and reasoning modes, Cogito v1-preview models outperform their size-equivalent counterparts on common industry benchmarks.
- Each model is trained in over 30 languages and supports a context length of 128k tokens.
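
To make the two modes concrete, here is a minimal Python sketch against a dedicated endpoint. The system prompt "Enable deep thinking subroutine." is the trigger phrase documented on the Cogito model card; "your-dedicated-endpoint-url" is the same placeholder used in the examples above, and the prompt text is arbitrary.

from together import Together

client = Together()

# Direct mode: no special system prompt; the model answers immediately,
# like a standard instruct LLM.
direct = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[{"role": "user", "content": "What is 23 * 19?"}],
)
print(direct.choices[0].message.content)

# Reasoning mode: per the Cogito model card, this system prompt switches
# the model into extended self-reflection before it gives a final answer.
reasoning = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[
        {"role": "system", "content": "Enable deep thinking subroutine."},
        {"role": "user", "content": "What is 23 * 19?"},
    ],
)
print(reasoning.choices[0].message.content)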
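
Because the models are tuned for tool calling, a function-calling sketch may also help. This assumes your dedicated endpoint accepts the OpenAI-style tools parameter that Together's chat completions API exposes; get_weather is a hypothetical tool defined here only for illustration.

from together import Together

client = Together()

# Hypothetical tool schema: the model never executes this; it only
# decides whether to request a call and with what arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="your-dedicated-endpoint-url",
    messages=[{"role": "user", "content": "What's the weather in New York?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call arrives in
# tool_calls instead of plain text content.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)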
Evaluations
We compare our models against state-of-the-art models of equivalent size, in both direct mode and reasoning mode. For direct mode, we compare against the Llama / Qwen instruct counterparts. For reasoning mode, we use DeepSeek's R1-distilled counterparts and Qwen's QwQ model.

LiveBench Global Average (benchmark chart omitted here)

For detailed evaluations, please refer to the Blog Post.