
LFM2 24B A2B API

Efficient hybrid model optimized for high-volume multi-agent workflows.


This model isn’t available on Together’s Serverless API.

Deploy this model on an on-demand Dedicated Endpoint or pick a supported alternative from the Model Library.

LFM2-24B-A2B is a hybrid Mixture-of-Experts (MoE) model with 24B total parameters (2.3B activated per token), optimized as the fast inner-loop model in high-volume multi-agent pipelines. Its hybrid architecture combines 30 double-gated LIV convolution blocks with 10 GQA (grouped-query attention) blocks, keeping inference cheap enough to run massive agent concurrency on the same infrastructure. With native function calling, web search, and structured outputs, LFM2-24B-A2B serves as the generation backbone in high-throughput RAG pipelines, supports 9 languages, and handles a 32,768-token context on Together AI's production infrastructure.

• 24B — Total Parameters (2.3B activated): cost-effective inference at production scale
• 30+10 — Hybrid Architecture Layers: LIV convolution blocks + GQA blocks
• 9 — Languages Supported: high-volume multilingual workflows

Key Capabilities

  • ✓ Cost-Effective Inference: 24B parameters with only 2.3B active—cheaper inference enabling massive agent concurrency
  • ✓ Fast Inner-Loop Model: Optimized for high-volume multi-agent pipelines with native function calling and structured outputs
  • ✓ Hybrid Architecture: 30 double-gated LIV convolution blocks + 10 GQA blocks for efficient production inference
  • ✓ Production-Ready Infrastructure: 99.9% SLA, 32K context, 9-language support, available on dedicated endpoints
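The native function calling mentioned above uses the OpenAI-compatible `tools` format. The sketch below only builds a request payload locally, without sending it; the `get_weather` tool is illustrative, and actually dispatching the request requires a `TOGETHER_API_KEY` and the `/v1/chat/completions` endpoint shown later on this page.

```python
# Sketch: assemble an OpenAI-compatible function-calling payload for
# LFM2-24B-A2B. The get_weather tool is hypothetical; sending the request
# requires a TOGETHER_API_KEY and the Together chat completions endpoint.

def build_tool_call_payload(user_message: str) -> dict:
    """Build a chat request that exposes one callable tool to the model."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "LiquidAI/LFM2-24B-A2B",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_call_payload("What's the weather in Lima?")
```

In a real agent loop, the model's `tool_calls` response would be executed and the result appended as a `tool` message before re-prompting.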

LFM2 24B A2B API Usage

Endpoint

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LiquidAI/LFM2-24B-A2B",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
}'
curl -X POST https://api.together.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "LiquidAI/LFM2-24B-A2B",
    "messages": [{
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  }'

from together import Together

client = Together()

response = client.chat.completions.create(
  model="LiquidAI/LFM2-24B-A2B",
  messages=[
    {
      "role": "user",
      "content": "What are some fun things to do in New York?"
    }
  ]
)
print(response.choices[0].message.content)

from together import Together

client = Together()
response = client.chat.completions.create(
  model="LiquidAI/LFM2-24B-A2B",
  messages=[
    {
      "role": "user",
      "content": "Given two binary strings `a` and `b`, return their sum as a binary string"
    }
  ],
)

print(response.choices[0].message.content)

import Together from 'together-ai';
const together = new Together();

const completion = await together.chat.completions.create({
  model: 'LiquidAI/LFM2-24B-A2B',
  messages: [
    {
      role: 'user',
      content: 'What are some fun things to do in New York?'
     }
  ],
});

console.log(completion.choices[0].message.content);

import Together from "together-ai";

const together = new Together();

async function main() {
  const response = await together.chat.completions.create({
    model: "LiquidAI/LFM2-24B-A2B",
    messages: [{
      role: "user",
      content: "Given two binary strings `a` and `b`, return their sum as a binary string"
    }]
  });
  
  console.log(response.choices[0]?.message?.content);
}

main();


How to use LFM2 24B A2B

Model details

Architecture Overview:
• Hybrid MoE model with 24B total parameters, 2.3B activated per token
• 40-layer architecture: 30 double-gated LIV convolution blocks + 10 GQA blocks
• 64 experts per MoE block with top-4 routing, first 2 layers dense
• Hidden dimension: 2,048 with expert intermediate size: 1,536
• 32,768 token context length for extended workflows
• 65,536 vocabulary size for efficient tokenization
• Minimal active parameters enabling massive agent concurrency
• Designed as fast inner-loop model in multi-step agent pipelines
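The figures above imply some simple ratios worth keeping in mind when sizing deployments. The arithmetic below is back-of-envelope only: because the first two layers are dense and attention/convolution weights are always active, the per-token active fraction is not simply the expert routing fraction.

```python
# Back-of-envelope ratios from the architecture figures above:
# 24B total parameters, 2.3B activated per token, top-4 routing over
# 64 experts per MoE block.

TOTAL_PARAMS = 24e9
ACTIVE_PARAMS = 2.3e9
EXPERTS_PER_BLOCK = 64
TOP_K = 4

# Fraction of all weights touched per token (~9.6%).
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS

# Fraction of experts that fire per MoE block (6.25%); lower than the
# active fraction because dense layers and attention always run.
expert_fraction = TOP_K / EXPERTS_PER_BLOCK

print(f"{active_fraction:.1%} of weights active per token")
print(f"{expert_fraction:.2%} of experts routed per MoE block")
```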

Training Methodology:
• Trained on 17T tokens (pre-training ongoing)
• General-purpose instruct model without reasoning traces
• Optimized for fast inference in high-volume multi-agent systems
• 9-language support: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, Portuguese

Performance Characteristics:
• Cost-effective efficiency: 24B MoE with only 2.3B active parameters per token
• Native function calling for tool orchestration in agent workflows
• Web search integration for retrieval-augmented generation
• Structured outputs for reliable data extraction and formatting
• Fast inner-loop performance optimized for multi-step pipelines
• High-throughput inference enabling massive concurrent workloads
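Structured outputs are typically requested by attaching a JSON schema to the request. The sketch below builds such a request body locally; the exact `response_format` field names follow Together's JSON-mode convention and may vary by SDK version, so treat them as an assumption and check the current API reference before use.

```python
# Sketch: request structured output by attaching a JSON schema.
# ASSUMPTION: the response_format shape ({"type": "json_object",
# "schema": ...}) follows Together's JSON-mode docs; verify against the
# current API reference. The extraction schema itself is illustrative.

extraction_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["name", "email"],
}

request_body = {
    "model": "LiquidAI/LFM2-24B-A2B",
    "messages": [
        {"role": "user", "content": "Extract the contact info from this text: ..."}
    ],
    "response_format": {"type": "json_object", "schema": extraction_schema},
}
```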

Prompting LFM2 24B A2B

Applications & Use Cases

High-Volume Multi-Agent Pipelines:
• Optimized as fast inner-loop model for multi-step agent workflows at scale
• Native function calling for tool orchestration and API integration
• Structured outputs for reliable data extraction between agent steps
• Minimal active parameters (2.3B) enabling massive concurrent agent execution
• 32K context supporting extended multi-turn agent conversations
• Cost-effective inference for high-throughput production deployments
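The concurrency pattern these bullets describe can be sketched with `asyncio`. Here `call_agent` is a stub standing in for an async Together chat-completions call (no network I/O); the point is the fan-out shape, in which many cheap inner-loop calls run concurrently against one deployment.

```python
import asyncio

# Sketch: fan out many inner-loop agent steps concurrently.
# call_agent is a stub for an async Together chat-completions call;
# with only 2.3B active parameters per token, many such calls can
# share the same endpoint.

async def call_agent(task_id: int) -> str:
    await asyncio.sleep(0)  # placeholder for the real API round-trip
    return f"result-{task_id}"

async def run_pipeline(n_agents: int) -> list[str]:
    # gather() schedules all agent calls concurrently and preserves order.
    return await asyncio.gather(*(call_agent(i) for i in range(n_agents)))

results = asyncio.run(run_pipeline(8))
```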

High-Throughput RAG Pipelines:
• Generation backbone optimized for production-scale retrieval-augmented setups
• Web search integration for real-time information retrieval
• Structured outputs for consistent formatting of retrieved data
• Efficient tokenization with 65,536 vocabulary size
• Fast inference enabling low-latency, high-volume RAG responses
• Cost-effective scaling for enterprise RAG deployments
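In the generation step of a RAG pipeline, retrieved passages are packed into the prompt before calling the model. A minimal sketch of that assembly step, with retrieval itself (vector search, reranking) out of scope and the passages illustrative:

```python
# Sketch: assemble retrieved passages into a grounded prompt for the
# generation step of a RAG pipeline. Retrieval (vector search, reranking)
# is out of scope here; the passages are illustrative placeholders.

def build_rag_messages(question: str, passages: list[str]) -> list[dict]:
    """Number the passages and instruct the model to answer only from them."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    system = (
        "Answer using only the numbered passages below. "
        "Cite passage numbers in your answer.\n\n" + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_rag_messages(
    "What animals can I find near Peru?",
    [
        "The llama is a domesticated South American camelid.",
        "The guanaco is a camelid native to South America.",
    ],
)
```

The resulting `messages` list drops straight into the chat-completions calls shown earlier on this page.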

Production Agentic Tool Use:
• Native function calling for seamless tool integration at scale
• Web search capabilities for autonomous information gathering
• Structured outputs ensuring reliable tool response parsing
• Fast inner-loop performance for high-throughput agent operations
• Multi-language support (9 languages) for global deployment
• Minimal active parameters reducing inference costs

Cost-Effective Inference at Scale:
• 24B parameters with only 2.3B active—cheaper inference per token
• Run more concurrent agents on same infrastructure
• Hybrid architecture optimized for production efficiency
• Minimal memory footprint via sparse MoE activation
• High-volume deployment without proportional cost increases

Multilingual Production Applications:
• 9-language support: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, Portuguese
• Cross-lingual agent workflows and tool calling
• Multilingual RAG pipelines with consistent performance
• Global deployment with regional language support
• Cost-effective scaling across international markets

Looking for production scale? Deploy on a dedicated endpoint

Deploy LFM2 24B A2B on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started