Together AI partners with Snowflake to bring Arctic LLM to Enterprise customers

April 25, 2024


Together AI

We are proud to be a launch partner for Snowflake Arctic, an enterprise-grade LLM, now available on Together Inference with best-in-class performance. Enterprises can now build and deploy scalable production applications with complex workloads in the environment of their choice (Together Cloud, private cloud, or on-premise) using the Together Platform and the Snowflake Arctic LLM.

Snowflake Arctic includes a number of significant advancements:

  • Snowflake Arctic is a state-of-the-art large language model (LLM) designed to be the most open, enterprise-grade LLM on the market.
  • Arctic delivers top-tier intelligence with unparalleled efficiency at scale, thanks to its unique Mixture-of-Experts (MoE) architecture.
  • It is optimized for complex enterprise workloads, topping several industry benchmarks across SQL code generation, instruction following, and more.
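
As context for the MoE bullet above, here is a toy sketch of how expert routing works in general (this is not Arctic's actual implementation, and all names here are illustrative): a gating layer scores every expert for each token, and only the top-k experts actually run, so compute per token stays small even as total parameters grow.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    """Route one token vector through the top_k highest-scoring experts."""
    # Gate: one score per expert, from a linear projection of the token.
    scores = [sum(w * x for w, x in zip(row, token)) for row in gate_weights]
    probs = softmax(scores)
    # Keep only the k experts with the highest gate probability.
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    # Output is the gate-weighted sum of just the chosen experts' outputs.
    out = [0.0] * len(token)
    for i in chosen:
        y = experts[i](token)
        out = [o + (probs[i] / norm) * yi for o, yi in zip(out, y)]
    return out, chosen

The inactive experts are never evaluated, which is the efficiency-at-scale property the bullet refers to.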

This release features the Snowflake Arctic 480-billion-parameter MoE model, which supports a broad range of use cases, and today we are making it available for inference through the Together API.

import os
from together import Together

client = Together(api_key=os.environ.get("TOGETHER_API_KEY"))

response = client.chat.completions.create(
    model="Snowflake/snowflake-arctic-instruct",
    messages=[{"role": "user", "content": "Generate a SQL command to find all users in California with over $1000 balance"}],
)
print(response.choices[0].message.content)


Get started today

Get started now and build with Snowflake Arctic using the Together API: Getting started

For enterprises building production-grade secure applications, we offer the ability to run the Together Platform in VPC and on-premise deployments, as well as the option to deploy dedicated instances on the Together Cloud. Contact our sales team to discuss your enterprise deployments. 

Advantages of open-source models for enterprises
Open source models are increasingly the preferred choice for enterprise deployments. They are faster, more customizable, and more private.

  • High accuracy on your enterprise-specific tasks through Advanced RAG and fine-tuning.
  • Faster performance, by deploying the right-sized model for your task, and deploying with Together Inference for industry-leading speed.
  • Privacy and flexibility of deployment configurations, including VPC and on-premise deployment.
  • Greater control, including the ability to deeply fine-tune, version control, quantize, and otherwise change how the model is run.

We can’t wait to see what you’ll build!

Run Snowflake Arctic for your production traffic

Interested in running the Together Platform in VPC deployments or as a dedicated endpoint on the Together Cloud? Contact us here →