optimizing-attention-flash
Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), when hitting GPU memory limits in the attention layers, or when faster inference is needed. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention.
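A minimal sketch of the PyTorch-native SDPA path mentioned above: `torch.nn.functional.scaled_dot_product_attention` dispatches to a fused Flash Attention kernel when the hardware and inputs allow it. The shapes, dtype, and causal setting below are illustrative assumptions, not taken from the skill itself.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes (assumed): batch, heads, sequence length, head dimension
batch, heads, seq_len, head_dim = 2, 8, 2048, 64
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
v = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)

# Fused attention: avoids materializing the full (seq_len x seq_len) score matrix,
# which is where the memory savings over naive attention come from.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 2048, 64])
```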
Installer
$ git clone https://github.com/zechenzhangAGI/AI-research-SKILLs /tmp/AI-research-SKILLs && cp -r /tmp/AI-research-SKILLs/10-optimization/flash-attention ~/.claude/skills/AI-research-SKILLs/
Tip: Run this command in your terminal to install the skill.
Repository: zechenzhangAGI/AI-research-SKILLs/10-optimization/flash-attention
Author: zechenzhangAGI
Stars: 62 · Forks: 2
Updated: 6d ago · Added: 6d ago