Company

Our $102.5M Series A

November 29, 2023

By Vipul Ved Prakash

At Together AI, we believe the future of AI is open source. So we are creating a comprehensive cloud platform to allow developers everywhere to build on open and custom AI models.

Today, I am thrilled to announce that we have raised $102.5M in a Series A financing to help us build this future. The round is led by Kleiner Perkins, with participation from an incredible group of investors, including NVIDIA and Emergence Capital. Bucky Moore from Kleiner Perkins will join the Together AI board as part of this financing. This new capital will allow us to significantly accelerate our goal of creating the fastest cloud platform for generative AI applications.

In addition to our lead investors, the funding round was joined by NEA, Prosperity 7, Greycroft, 137 Ventures, as well as many of our Seed investors, including Lux Capital, Definition Capital, Long Journey Ventures, SCB10x, SV Angel, Factory, and Scott Banister.

Since launch in June 2023, our AI compute, training, and inference products have seen tremendous adoption from startups and enterprises, and we plan to build on this momentum by scaling our services and introducing new products that make it easy to integrate AI into applications. “AI is a new layer of infrastructure that is changing how we develop software. To maximize its impact, it’s important that we make it available to developers everywhere,” said Bucky Moore, partner at Kleiner Perkins. “We expect the vast and growing landscape of open source models to see widespread adoption as their performance approaches that of closed alternatives. Together AI enables any organization to build fast, reliable applications on top of them.”

Today, more than ever, startups and enterprises alike are looking to build a generative AI strategy for their business that is free from lock-in to a single vendor. Open source AI provides a strong foundation for these applications, with increasingly powerful generative models being released almost weekly. The Together AI platform allows developers to quickly and easily integrate leading open source models or create their own through pre-training or fine-tuning. Our customers bring their generative AI workloads to Together AI for our industry-leading performance and reliability, with the assurance that they own the result of their investment in AI and are always free to run their models on any platform. “Emergence invested early in the enterprise shift to cloud applications, dating back to 2002 with our early investment in Salesforce,” said Joseph Floyd, General Partner at Emergence Capital. “We see a similar shift today where enterprises are rapidly investing in generative AI. Together AI is well positioned to be the platform of choice as enterprises look to control their proprietary IP while pushing their generative AI investments from prototype into production.”

Our industry-leading performance and reliability are driven by our focus on core research. We take a research-driven approach to building AI systems and products, and we publish novel research under open-source licenses that benefits the wider AI ecosystem. This year, we released RedPajama-V2, the largest open dataset for training LLMs, comprising 30 trillion tokens. RedPajama-V2 has been downloaded 1.2M times in the last month, which speaks to the breadth of interest in core AI development.

Earlier this year, our Chief Scientist, Tri Dao, and his collaborators released FlashAttention-2, which is used by OpenAI, Anthropic, Meta, Mistral, and others to build leading LLMs. Our novel work on inference, based on techniques like Medusa and Flash-Decoding, has resulted in the fastest inference stack for transformer models. It is available through the Together Inference API, which provides fast access to over 100 open models. Together’s research lab is also leading the charge on sub-quadratic models, which promise a more compute-efficient approach to longer-context AI models.

Along with research, we place a strong emphasis on compute infrastructure. We operate AI infrastructure that is growing to 20 exaflops across multiple data centers in the US and EU. Our cloud infrastructure, which runs NVIDIA GPUs and networking across AI cloud partners like Crusoe Cloud and Vultr, is custom-designed for high-performance AI applications. By building custom infrastructure, we can offer significantly better economics on pre-training and inference workloads. Leading AI startups, like Pika Labs, NexusFlow, Voyage AI, and Cartesia, are building a new class of models on Together Cloud.

We believe generative AI is a platform technology, a new operating system for applications, and will have a long-range impact on human society. The AI ecosystem will consist of proprietary models and open models, and it’s incredibly important that this future has choice and options. Our mission is to create a way for any researcher or developer to participate in shaping our AI future.

— Vipul Ved Prakash, Co-founder and CEO

  • 20% lower cost
  • 4x faster training
  • 117x network compression

Q: Should I use the RedPajama-V2 Dataset out of the box?

RedPajama-V2 is conceived as a pool of data that serves as a foundation for creating high-quality datasets. It is therefore not intended to be used out of the box; depending on the application, data should be filtered using the quality signals that accompany it. With this dataset, we take the view that the optimal filtering of data depends on the intended use. Our goal is to provide all the signals and tooling that enable this.
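As a minimal sketch of what such application-specific filtering might look like: the snippet below keeps only documents whose per-document quality signals clear some chosen thresholds. The signal names (`perplexity`, `word_count`) and the threshold values here are illustrative assumptions, not the actual RedPajama-V2 schema.

```python
# Hypothetical sketch of quality-signal filtering over a data pool.
# Signal names and thresholds are assumptions for illustration only;
# consult the RedPajama-V2 documentation for the real signal schema.

def passes_filters(doc, max_perplexity=500.0, min_words=50):
    """Keep a document only if its quality signals clear our thresholds."""
    signals = doc["quality_signals"]
    if signals["perplexity"] > max_perplexity:  # likely low-quality text
        return False
    if signals["word_count"] < min_words:       # too short to be useful
        return False
    return True

docs = [
    {"text": "A long, fluent article about model training ...",
     "quality_signals": {"perplexity": 120.0, "word_count": 800}},
    {"text": "spam spam spam",
     "quality_signals": {"perplexity": 900.0, "word_count": 3}},
]

filtered = [d for d in docs if passes_filters(d)]
print(len(filtered))  # only the first document survives filtering
```

The point of shipping raw signals rather than a pre-filtered dataset is exactly this flexibility: a code-focused corpus and a chat-focused corpus would set these thresholds differently over the same pool.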
