Episode 778: SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training

Duration: 21:11

🤗 Upvotes: 48 | cs.LG, cs.AI, cs.AR, cs.CV, cs.PF

Authors:
Jintao Zhang, Jia Wei, Pengle Zhang, Xiaoming Xu, Haofeng Huang, Haoxu Wang, Kai Jiang, Jun Zhu, Jianfei Chen

Title:
SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training

Arxiv:
http://arxiv.org/abs/2505.11594v1

Abstract:
The efficiency of attention is important due to its quadratic time complexity. We enhance the efficiency of attention through two key contributions. First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on the RTX5090, a 5x speedup over the fastest FlashAttention on that GPU. Experiments show that our FP4 attention can accelerate inference of various models in a plug-and-play way. Second, we pioneer the application of low-bit attention to training tasks. Existing low-bit attention works such as FlashAttention3 and SageAttention focus only on inference. However, the efficiency of training large models is also important. To explore whether low-bit attention can be effectively applied to training, we design an accurate and efficient 8-bit attention for both forward and backward propagation. Experiments indicate that 8-bit attention achieves lossless performance in fine-tuning tasks but exhibits slower convergence in pretraining tasks. The code will be available at https://github.com/thu-ml/SageAttention.
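
The paper's first contribution centers on microscaling FP4 quantization of the attention inputs. As a rough, self-contained illustration of that idea (not the paper's Blackwell kernels), the Python sketch below simulates per-block FP4 (E2M1) quantization of Q and K, with one scale per block, and compares the resulting attention scores against full precision. The block size of 16, the scale handling, and all function names are assumptions made for this example.

```python
# A minimal, illustrative sketch of microscaling FP4 (E2M1) quantization,
# simulated in PyTorch. Block size, scale format, and function names are
# assumptions for illustration only, not the paper's kernel implementation.
import torch

# Representable magnitudes of the FP4 E2M1 format (plus a sign bit).
E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4(x: torch.Tensor, block_size: int = 16):
    """Quantize the last dim of `x` in blocks: one scale per block,
    values rounded to the nearest FP4 (E2M1) grid point."""
    orig_shape = x.shape
    x = x.reshape(-1, block_size)                     # (num_blocks, block_size)
    scale = x.abs().amax(dim=-1, keepdim=True) / 6.0  # map block max to FP4 max (6)
    scale = torch.where(scale == 0, torch.ones_like(scale), scale)
    scaled = x / scale
    # Round each scaled value to the nearest representable FP4 magnitude.
    idx = (scaled.abs().unsqueeze(-1) - E2M1_GRID).abs().argmin(dim=-1)
    q = E2M1_GRID[idx] * scaled.sign()
    return q.reshape(orig_shape), scale

def dequantize_mxfp4(q: torch.Tensor, scale: torch.Tensor, block_size: int = 16):
    """Recover approximate full-precision values from FP4 codes and block scales."""
    orig_shape = q.shape
    return (q.reshape(-1, block_size) * scale).reshape(orig_shape)

if __name__ == "__main__":
    torch.manual_seed(0)
    q16, k16 = torch.randn(64, 128), torch.randn(64, 128)
    q4, sq = quantize_mxfp4(q16)
    k4, sk = quantize_mxfp4(k16)
    scores_fp4 = dequantize_mxfp4(q4, sq) @ dequantize_mxfp4(k4, sk).T
    scores_ref = q16 @ k16.T
    print("mean abs error:", (scores_fp4 - scores_ref).abs().mean().item())
```

In a real kernel, the FP4 codes and per-block scales would be consumed directly by the Blackwell FP4 Tensor Cores rather than dequantized in software; the simulation above only shows how per-block scaling keeps quantization error bounded within each block.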

