Introducing the Together Embeddings endpoint — Higher accuracy, longer context, and lower cost

January 11, 2024

By Together AI

Today, we are excited to release the Together Embeddings endpoint! Some of the highlights are:

  • 8 leading embedding models – including models that outperform OpenAI’s ada-002 and Cohere’s Embed-v3 on the MTEB and LoCo benchmarks
  • State-of-the-art long-context M2-Retrieval models with up to 32k context length
  • Up to 4x cheaper than other popular platforms
  • Integrations with MongoDB, LangChain, and LlamaIndex for RAG
  • A fully OpenAI-compatible API to make migrating easy

Text embeddings measure the similarity or relatedness of pieces of text, which makes them useful for applications such as clustering, semantic search, and classification. There has also been growing interest in retrieval-augmented generation (RAG), which addresses a key limitation of generative AI models — their lack of knowledge of data unseen during training — by using embeddings to find relevant information in a knowledge base and supplying it to a generative model.
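As a minimal sketch of this idea (using the same together client and model string that appear in the snippets later in this post; the two sentences are illustrative), you can score how related two texts are with cosine similarity:

import numpy as np
import together

# The client reads the TOGETHER_API_KEY environment variable.
client = together.Together()

resp = client.embeddings.create(
    input=["Our solar system has eight planets.", "How many planets orbit the sun?"],
    model="togethercomputer/m2-bert-80M-8k-retrieval",
)
a, b = (np.array(d.embedding) for d in resp.data)

# Cosine similarity: values closer to 1 indicate more related texts.
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)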

The Together Embeddings endpoint is available today with 8 open-source embedding models, including top models from the MTEB leaderboard (Massive Text Embedding Benchmark), such as UAE-Large-v1 and the BGE models, and the newly released M2-BERT retrieval models for long context (2k, 8k, and 32k). See the full list of embedding models here.

These models achieve state-of-the-art results on MTEB, with accuracy comparable to or better than that of closed models. The M2-BERT retrieval models also significantly outperform closed models on long-context retrieval tasks. This means you can now generate embeddings for long documents without splitting them into many short chunks, retaining more meaningful context in each embedding. You can also access these powerful models at very competitive prices — up to 4x cheaper — as shown in the pricing graph below.

As with our Inference endpoint, we will continue to add top open-source embedding models to our platform so developers can keep building successful AI applications. Request new embedding models by filling out this form.

OpenAI API Compatibility

The Together Embeddings endpoint is fully OpenAI-compatible, so if you have already built your application against the OpenAI API, you can switch over with a one-line change to the base URL:

from openai import OpenAI

# Point the OpenAI client at the Together API.
client = OpenAI(api_key=TOGETHER_API_KEY, base_url="https://api.together.xyz/v1")

def get_embedding(text, model="togethercomputer/m2-bert-80M-32k-retrieval"):
    text = text.replace("\n", " ")
    return client.embeddings.create(input=[text], model=model).data[0].embedding
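For example, assuming TOGETHER_API_KEY is set, the helper above can be called like this (the sample text is illustrative; the 80M M2-BERT models produce 768-dimensional embeddings):

embedding = get_embedding("Machine learning models benefit from long context.")
print(len(embedding))  # 768 for the 80M M2-BERT retrieval models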

Building RAG with Together Embeddings

One of the most popular use cases for embedding models is building a retrieval-augmented generation (RAG) system. You can now build RAG using the Together API with popular vector databases such as MongoDB, Pinecone, and Chroma, or with frameworks such as LangChain and LlamaIndex. Check out our deep-dive tutorials on building RAG using Together with MongoDB, with LangChain, or with LlamaIndex. You can learn more in our documentation.
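As a rough sketch of the pattern (not a substitute for the tutorials above), the snippet below embeds a tiny document set, retrieves the best match for a query by cosine similarity, and hands it to a chat model. The document texts, the prompt wording, and the generation model string are illustrative:

import numpy as np
import together

client = together.Together()
EMBED_MODEL = "togethercomputer/m2-bert-80M-8k-retrieval"

docs = [
    "Together Embeddings supports context lengths up to 32k tokens.",
    "RedPajama-v1 includes samples from arxiv, books, c4, github, and stackexchange.",
]

def embed(texts):
    out = client.embeddings.create(input=texts, model=EMBED_MODEL)
    return np.array([d.embedding for d in out.data])

doc_vecs = embed(docs)
query = "What context lengths do the embedding models support?"
q = embed([query])[0]

# Retrieve the most similar document by cosine similarity.
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
context = docs[int(np.argmax(scores))]

# Pass the retrieved context to a generative model (model string illustrative).
answer = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)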

Data Visualization Example

Embeddings can be used in a wide range of applications. In this section, we will look at how to use the Together Embeddings endpoint to visualize sample data from the various domains in the RedPajama-v1 sample dataset. The following code shows how to use one of the listed models, M2-BERT-Retrieval-8K (API string: togethercomputer/m2-bert-80M-8k-retrieval), for data visualization.

First, define functions to extract samples from a data file and generate embeddings using the Together API. 

import json, os
from typing import List

import numpy as np
import together

# Read the API key from the environment rather than hard-coding it.
together.api_key = os.environ["TOGETHER_API_KEY"]

def generate_embeddings(input_texts: List[str], model_api_string: str) -> List[List[float]]:
    """Generate embeddings from Together API.

    Args:
        input_texts: a list of string input texts.
        model_api_string: str. An API string for a specific embedding model of your choice.

    Returns:
        embeddings_list: a list of embeddings. Each element corresponds to one input text.
    """
    client = together.Together()
    outputs = client.embeddings.create(
        input=input_texts, 
        model=model_api_string,
    )
    return [x.embedding for x in outputs.data]

def sample_and_extract_embeddings(data_path: str, model_api_string: str, num_samples: int = -1, context_length=512) -> np.ndarray:
    """Sample data examples and extract embeddings for each example.

    Args:
        data_path: str. A path to the data file. It should be in the .jsonl format.
        model_api_string: str. An API string for a specific embedding model of your choice.
        num_samples: int. The number of data examples to sample.
        context_length: int. The max context length of the model (model_api_string).

    Returns:
        embeddings: np.ndarray of shape (num_samples, m), where m is the embedding dimension.

    """
    max_num_chars = context_length * 4 # Assuming that each token is about 4 characters.
    embeddings = []
    count = 0
    print(f"Reading from {data_path}")
    with open(data_path, 'r') as f:
        for line in f:
            try:
                ex = json.loads(line)
                total_chars = len(ex['text'])
                if total_chars < max_num_chars:
                    text_ls = [ex['text']]
                else:
                    text_ls = [ex['text'][i : i + max_num_chars] for i in range(0, total_chars, max_num_chars)]                
                embeddings.extend(generate_embeddings(text_ls[:min(len(text_ls), num_samples-count)], model_api_string))
                count += min(len(text_ls), num_samples-count)
            except Exception as e:
                print(f'Error occurred while loading the JSON file of {data_path} with the error message {e}.')
            if count >= num_samples:
                break
    return np.array(embeddings)

The dimensionality of the embedding vectors is usually too high to visualize directly. For M2-BERT-Retrieval-8K, the embedding dimension is 768. To plot each data example in 2D, we will transform the embedding vectors with t-SNE.

import os
from typing import List

import matplotlib
from matplotlib import pyplot as plt
from sklearn.manifold import TSNE
import numpy as np

def transform_tSNE(arr: np.ndarray, n_components: int, perplexity: int) -> np.ndarray:
    """Transform the given ndarray using a tSNE model.

    Args:
        arr: np.ndarray. In this example, an embedding matrix (n by m), where n is the number of examples and m is the embedding dimension.
        n_components: int. The number of components for tSNE.
        perplexity: int. Perplexity for tSNE.

    Returns:
        vis_dims: np.ndarray. A transformed matrix of n by n_components.
    """
    tsne = TSNE(n_components=n_components, perplexity=perplexity, random_state=42, init='random')
    vis_dims = tsne.fit_transform(arr)
    return vis_dims

def plot_embeddings(embed_arr_ls: List[np.ndarray], n_components: int, names: List[str], perplexity: int):
    """Plot transformed embedding vectors with predefined labels.

    Args:
        embed_arr_ls: a list of np.ndarray. Each np.ndarray is a matrix with embeddings corresponding to data examples.
        n_components: int. The number of components for tSNE.
        names: a list of str. The names of the data sources. The length of this list should be the same as the length of embed_arr_ls.
        perplexity: int. Perplexity for t-SNE.

    Returns:
        None
    """
    vis_dims = transform_tSNE(np.concatenate(embed_arr_ls), n_components, perplexity)

    colors = ["red", "blue", "green", "orange", "purple"]
    colormap = matplotlib.colors.ListedColormap(colors)
    color_indices = []
    for label in range(len(embed_arr_ls)):
        color_indices += [label] * len(embed_arr_ls[label])
    assert len(vis_dims) == len(color_indices)
    
    x = [x for x,y in vis_dims]
    y = [y for x,y in vis_dims]

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=color_indices, cmap=colormap, alpha=0.3)
    
    for label in range(len(embed_arr_ls)):
        color = colors[label]
        label_indices = [i for i, value in enumerate(color_indices) if value == label]
        avg_x = np.array(x)[label_indices].mean()
        avg_y = np.array(y)[label_indices].mean()
        ax.scatter(avg_x, avg_y, marker='x', color=color, s=100, label=names[label])

    ax.legend()
    plt.title("RedPajama-v1 sample data visualized in language using t-SNE")
    plt.show()

In the main script, run the following to display the plot:

# Define values.
data_dir = ""  # Set this to the directory where the sample dataset is downloaded.
model = "togethercomputer/m2-bert-80M-8k-retrieval"  # You can use a different model.
context_length = 8192
num_samples = 50
filenames = ["arxiv_sample.jsonl", "book_sample.jsonl", "c4_sample.jsonl", "github_sample.jsonl", "stackexchange_sample.jsonl"]

embedding_ls = []
for f_name in filenames:
    embed_arr = sample_and_extract_embeddings(
        os.path.join(data_dir, f_name),
        model_api_string=model,
        num_samples=num_samples,
        context_length=context_length,
    )
    embedding_ls.append(embed_arr)

plot_embeddings(embedding_ls, n_components=2, names=filenames, perplexity=15)

As expected, texts from the same data source cluster together. The plot also shows similarity between the github and stackexchange samples, and between book and c4, while github and book are less related.

Get Started with Together API

Data visualization is just one of many use cases where embeddings can be used. Explore what you can do using embeddings by accessing all the models on our platform through the Together API. For more information, see our documentation.
