Releases · Dao-AILab/flash-attention
v2.6.3
Bump to v2.6.3
v2.6.2
Bump to v2.6.2
v2.6.1
Bump to v2.6.1
v2.6.0.post1
[CI] Compile with PyTorch 2.4.0.dev20240514
v2.6.0
Bump to v2.6.0
v2.5.9.post1
Limit to MAX_JOBS=1 with CUDA 12.2
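Setting MAX_JOBS caps ninja's parallel compilation jobs during a source build, which keeps nvcc's memory usage bounded on machines with limited RAM. Below is a minimal sketch of a source install under that cap, assuming a pip-based build of flash-attn; the `--no-build-isolation` flag follows the project's README install instructions.

```python
# A minimal sketch: install flash-attn from source with compilation
# limited to a single parallel job, mirroring the MAX_JOBS=1 cap that
# v2.5.9.post1 applies to CUDA 12.2 builds.
import os
import subprocess

env = dict(os.environ, MAX_JOBS="1")  # cap parallel nvcc jobs to bound memory use
subprocess.run(
    ["pip", "install", "flash-attn", "--no-build-isolation"],
    env=env,
    check=True,
)
```

Equivalently, from a shell: `MAX_JOBS=1 pip install flash-attn --no-build-isolation`.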
v2.5.9
Bump to v2.5.9
v2.5.8
Bump to v2.5.8
v2.5.7
Bump to v2.5.7
v2.5.6
Bump to v2.5.6