
Mamba-3B-SlimPJ: State-space models rivaling the best Transformer architecture

December 12, 2023

By Tri Dao, Albert Gu

The Mamba architecture, building on a long line of work on state-space models (e.g., S4) and hardware-efficient algorithms (e.g., FlashAttention), has emerged as a strong contender to Transformers, but with linear scaling in sequence length and fast inference. As part of a collaboration between us, Together AI, and Cartesia AI, we are releasing a Mamba model with 3B parameters trained on 600B tokens of the SlimPajama dataset, under the Apache 2.0 license.

Model code: https://github.com/state-spaces/mamba

Model weights: https://huggingface.co/state-spaces/mamba-2.8b-slimpj
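
For readers who want to try the checkpoint, here is a minimal generation sketch (not part of the release itself). It assumes the mamba_ssm package installed from the model code repo above, its MambaLMHeadModel.from_pretrained and generate interfaces, and the GPT-NeoX tokenizer used for training; exact argument names may differ across package versions.

```python
# Minimal sketch, assuming mamba_ssm from the repo above; APIs may differ by version.
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"
# The model was trained with the GPT-NeoX tokenizer (see "Training details" below).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained(
    "state-spaces/mamba-2.8b-slimpj", device=device, dtype=torch.bfloat16
)

input_ids = tokenizer("State-space models are", return_tensors="pt").input_ids.to(device)
# generate() here is mamba_ssm's own decoding loop, not the Hugging Face one.
out = model.generate(input_ids=input_ids, max_length=64, temperature=0.8, top_p=0.9)
print(tokenizer.decode(out[0]))
```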

Trained on 600B tokens, Mamba-3B-SlimPJ matches the quality of some of the best 3B Transformers, such as BTLM-3B-8K (also trained on roughly 600B tokens), while using 17% fewer training FLOPs. BTLM-3B-8K uses a strong Transformer architecture with advanced training techniques and even surpasses some 7B Transformers. This further validates Mamba as a promising architecture for building foundation models.

Training details

We trained Mamba-3B-SlimPJ on 600B tokens, with context length 2048, using the same hyperparameters as Mamba-3B on the Pile (300B tokens), except with a longer learning rate decay to accommodate more tokens.
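
To make the schedule change concrete, here is an illustrative sketch of a warmup-plus-cosine learning rate schedule whose decay horizon is stretched to the full 600B-token budget; the peak/minimum learning rates and warmup length below are placeholders, not the hyperparameters actually used for this run.

```python
import math

# Illustrative only: cosine decay stretched over the full 600B-token budget.
# peak_lr, min_lr, and warmup_tokens are placeholder values, not the run's settings.
def lr_at(tokens_seen, total_tokens=600e9, warmup_tokens=1e9, peak_lr=3e-4, min_lr=3e-5):
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens  # linear warmup
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(300e9))  # mid-decay under the longer schedule
print(lr_at(600e9))  # reaches min_lr only at the end of the 600B-token run
```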

We use the SlimPajama dataset, with the GPT-NeoX tokenizer. The SlimPajama dataset is a cleaned and deduplicated version of RedPajama. This is what we love about open-source AI: different groups building on each other’s work on data and models.
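
As a rough sketch of what this data pipeline looks like, the snippet below streams SlimPajama from the Hugging Face Hub and tokenizes one document with the GPT-NeoX tokenizer; the dataset id cerebras/SlimPajama-627B and the "text" field are assumptions about the Hub release, and this is not the exact preprocessing used for training.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed Hub ids; not the exact training preprocessing pipeline.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
slimpj = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

doc = next(iter(slimpj))                   # one streamed document
ids = tokenizer(doc["text"])["input_ids"]
print(len(ids), "GPT-NeoX tokens in the first document")
```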

Evaluation

Mamba-3B-SlimPJ matches the quality of a very strong Transformer (BTLM-3B-8K) with 17% fewer training FLOPs. More data and compute generally yield better models: for example, the similarly sized StableLM-3B-4E1T, trained on roughly 7x more tokens (1T tokens for 4 epochs), still outperforms both Mamba-3B-SlimPJ and BTLM-3B-8K.

We evaluate Mamba-3B-SlimPJ on 10 tasks following the procedure in BTLM-3B-8K: BoolQ, PIQA, HellaSwag, WinoGrande, ARC-easy, ARC-challenge, OpenBookQA, RACE-high, TruthfulQA, and MMLU. All evaluations are zero-shot, except MMLU, which is 5-shot. We report normalized accuracy for PIQA, HellaSwag, ARC-e, ARC-c, OpenBookQA, and MMLU, and accuracy for BoolQ, WinoGrande, RACE-high, and TruthfulQA (MC2 score).
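
Written out as a small config, the protocol above looks roughly like this (metric names follow common lm-evaluation-harness conventions such as acc and acc_norm; this is a summary of the text, not a runnable harness config):

```python
# Shots and reported metric per task, as described above.
EVAL_PROTOCOL = {
    "boolq":         {"shots": 0, "metric": "acc"},
    "piqa":          {"shots": 0, "metric": "acc_norm"},
    "hellaswag":     {"shots": 0, "metric": "acc_norm"},
    "winogrande":    {"shots": 0, "metric": "acc"},
    "arc_easy":      {"shots": 0, "metric": "acc_norm"},
    "arc_challenge": {"shots": 0, "metric": "acc_norm"},
    "openbookqa":    {"shots": 0, "metric": "acc_norm"},
    "race_high":     {"shots": 0, "metric": "acc"},
    "truthfulqa":    {"shots": 0, "metric": "mc2"},
    "mmlu":          {"shots": 5, "metric": "acc_norm"},
}
```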

                    Mamba-3B-SlimPJ   BTLM-3B-8K   StableLM-3B-4E1T
Number of params    2.77B             2.65B        2.80B
Number of tokens    604B              627B         4T
Training FLOPs      1.01E22           1.22E22      8.33E22
BoolQ               71.0              70.0         75.5
PIQA                78.1              77.2         79.8
HellaSwag           71.0              69.8         73.9
WinoGrande          65.9              65.8         66.5
ARC-e               68.2              66.9         67.8
ARC-c               41.7              37.6         40.0
OpenBookQA          39.8              40.4         39.6
RACE-high           36.6              39.4         40.6
TruthfulQA          34.3              36.0         37.2
MMLU                26.2              28.1         44.2
Avg. accuracy       53.3              53.1         56.5
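
Two quick sanity checks on the table: the reported average accuracy is the unweighted mean over the 10 tasks, and the "17% fewer FLOPs" claim follows directly from the training-FLOP row (numbers copied from the table above).

```python
mamba = [71.0, 78.1, 71.0, 65.9, 68.2, 41.7, 39.8, 36.6, 34.3, 26.2]
btlm  = [70.0, 77.2, 69.8, 65.8, 66.9, 37.6, 40.4, 39.4, 36.0, 28.1]

print(round(sum(mamba) / len(mamba), 1))   # 53.3
print(round(sum(btlm) / len(btlm), 1))     # 53.1
print(1 - 1.01e22 / 1.22e22)               # ~0.17 -> 17% fewer training FLOPs
```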

Looking forward

Transformers such as BTLM-3B-8K benefit from more advanced training techniques, including variable-length training and maximal update parameterization. We look forward to exploring these techniques to improve Mamba training in the future.

We’ve been very happy to see the excitement around SSMs and architectures beyond Transformers in general, and Mamba in particular. Part of the motivation for this release is to provide a stronger base model for experimentation and understanding, as well as a starting point for chat and instruction-tuned models. We believe Mamba can be a strong candidate for foundation models across diverse applications such as language, genomics, audio, and video.

Acknowledgement

Thanks to Cerebras for the SlimPajama dataset, and to Cerebras and OpenTensor for the BTLM-3B-8K model. We also thank EleutherAI for the Pile dataset and the lm-evaluation-harness.
