ToDo: Token Downsampling for Efficient Generation of High-Resolution Images
Ethan Smith, Nayan Saxena, Aninda Saha
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Demo Track. Pages 8801-8804.
https://doi.org/10.24963/ijcai.2024/1036
Attention has been a crucial component in the success of image diffusion models; however, its quadratic computational complexity limits the sizes of images we can process within reasonable time and memory constraints. This paper investigates the importance of dense attention in generative image models, which often contain redundant features, making them suitable for sparser attention mechanisms. We propose ToDo, a novel training-free method that downsamples key and value tokens to accelerate Stable Diffusion inference by up to 2x for common sizes and up to 4.5x or more for high resolutions such as 2048 × 2048. We demonstrate that our approach outperforms previous methods in balancing efficient throughput and fidelity.
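The core idea — reducing attention cost by downsampling only the key and value tokens while keeping all query tokens — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of average pooling (rather than whatever downsampling operator the paper employs), and the assumption of a square token grid are all choices made here for clarity.

```python
import torch
import torch.nn.functional as F

def todo_style_attention(q, k, v, downsample=2):
    """Hypothetical sketch of token-downsampled attention.

    q, k, v: tensors of shape (batch, tokens, dim), where tokens = h * w
    for a square latent grid. Keys and values are spatially downsampled
    before attention, so cost drops from O(N^2) toward O(N * N/d^2),
    while the query token count (and hence output size) is unchanged.
    """
    b, n, c = k.shape
    h = w = int(n ** 0.5)  # assumes a square token grid

    # Reshape key/value tokens back onto their 2D grid and pool them.
    k2 = k.transpose(1, 2).reshape(b, c, h, w)
    v2 = v.transpose(1, 2).reshape(b, c, h, w)
    k2 = F.avg_pool2d(k2, downsample)  # illustrative; paper's operator may differ
    v2 = F.avg_pool2d(v2, downsample)

    # Flatten pooled grids back to token sequences: (b, n/d^2, c).
    k = k2.flatten(2).transpose(1, 2)
    v = v2.flatten(2).transpose(1, 2)

    # Standard scaled dot-product attention, now with fewer K/V tokens.
    return F.scaled_dot_product_attention(q, k, v)
```

With `downsample=2`, the key/value sequence shrinks by 4×, while the output retains one token per query — which is why the method can plug into a pretrained model without retraining.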
Keywords:
Computer Vision: CV: Neural generative models, auto encoders, GANs
Machine Learning: ML: Attention models