Models / BAAI
Embeddings

BGE-Base-EN v1.5

This model maps any text to a low-dimensional dense vector using FlagEmbedding.

About model

BGE-Base-EN v1.5 generates English text embeddings using a BERT-based encoder architecture, achieving strong retrieval performance at a compact 109M parameter size. It enhances retrieval without requiring query instructions and is widely used for its balance of embedding quality and efficiency. It is well suited to developers building search and RAG applications.

  • API usage

    • cURL
    • Python
    • TypeScript

    Endpoint:

    BAAI/bge-base-en-v1.5-vllm

    curl -X POST https://api.together.xyz/v1/embeddings \
      -H "Authorization: Bearer $TOGETHER_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "input": "Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
        "model": "BAAI/bge-base-en-v1.5-vllm"
      }'
    
    from together import Together

    client = Together()

    # Request an embedding for a single input string
    response = client.embeddings.create(
        model="BAAI/bge-base-en-v1.5-vllm",
        input="Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
    )
    
    
    import Together from "together-ai";

    const together = new Together();

    // Request an embedding for a single input string
    const response = await together.embeddings.create({
      model: "BAAI/bge-base-en-v1.5-vllm",
      input: "Our solar system orbits the Milky Way galaxy at about 515,000 mph.",
    });
    
    
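The vectors returned by the endpoint above are typically compared with cosine similarity for search and RAG ranking. A minimal sketch of that comparison, using short toy vectors in place of the real 768-dimensional BGE-Base embeddings (the function itself works on vectors of any length):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors standing in for embeddings returned by the API
# (e.g. response.data[0].embedding in the Python SDK)
query_emb = [0.1, 0.3, 0.5, 0.2]
doc_emb = [0.2, 0.3, 0.4, 0.1]

print(round(cosine_similarity(query_emb, doc_emb), 3))  # → 0.965
```

In a retrieval pipeline you would embed each document once, store the vectors, then embed the query at request time and rank documents by this score.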
Related models
  • Model provider
    BAAI
  • Type
    Embeddings
  • Main use cases
    Embeddings
  • Deployment
    Serverless
    Monthly Reserved
  • Parameters
    109M
  • Context length
    512
  • Input price

    $0.01 / 1M tokens

  • Input modalities
    Text
  • Output modalities
    Structured Data