This repo contains the code associated with the paper *Theory, Analysis, and Best Practices for Sigmoid Self-Attention* (arXiv:2409.04431).
The three components of this release are:
- FlashSigmoid: A hardware-aware implementation of Sigmoid Attention (see the sketch below).
- Optorch: A PyTorch-based functional implementation of standard optimizers.
- Attention Simulator: A research-friendly codebase for diagnosing and debugging attention.
See the README.md in each component's directory for installation and usage instructions.
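For orientation, the operation FlashSigmoid accelerates replaces the row-wise softmax with an elementwise sigmoid plus a constant bias (the paper recommends b ≈ -log n, with n the sequence length). Below is a minimal, non-fused PyTorch sketch of that idea; it is a naive reference for readability, not the FlashSigmoid kernel itself.

```python
import math
import torch


def naive_sigmoid_attention(q, k, v):
    """Naive reference for sigmoid attention (not the fused FlashSigmoid kernel).

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    Uses an elementwise sigmoid with a -log(n) bias in place of softmax,
    following the bias recommendation from the paper.
    """
    n, d = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # (batch, heads, n, n)
    attn = torch.sigmoid(scores - math.log(n))       # elementwise; no row-wise normalization
    return attn @ v
```

Because the sigmoid is applied elementwise, each query-key score is handled independently of the rest of its row, which is what makes the operation amenable to the hardware-aware kernels in FlashSigmoid.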
We provide a convenience installation helper for all three packages:
```bash
# Create an environment for sigmoid attention, if not done already.
conda create -n sigmoid-attn-py310 python=3.10
conda activate sigmoid-attn-py310

# Set up FlashSigmoid -> Optorch -> Attention Simulator.
bash setup.bash
```
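After the setup script finishes, a quick smoke test can confirm the environment is usable. The module names below are illustrative assumptions, not guaranteed import paths; check each component's README for the actual package names.

```python
# Hypothetical smoke test; the component module names are assumptions for illustration.
import importlib.util

import torch

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

# Adjust these names to whatever each component's README specifies.
for candidate in ("flash_sigmoid", "optorch", "attention_simulator"):
    spec = importlib.util.find_spec(candidate)
    status = "found" if spec is not None else "NOT found -- see the component README"
    print(f"{candidate}: {status}")
```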
Figures: forward and backward pass kernels on H100; train losses comparing SigmoidAttn with SoftmaxAttn.
If you find this work useful in your research, please cite:
```bibtex
@misc{ramapuram2024theoryanalysisbestpractices,
  title={Theory, Analysis, and Best Practices for Sigmoid Self-Attention},
  author={Jason Ramapuram and Federico Danieli and Eeshan Dhekane and Floris Weers and Dan Busbridge and Pierre Ablin and Tatiana Likhomanenko and Jagrit Digani and Zijin Gu and Amitis Shidani and Russ Webb},
  year={2024},
  eprint={2409.04431},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2409.04431},
}
```