Boosting Diffusion Models with an Adaptive Momentum Sampler

Xiyu Wang, Anh-Dung Dinh, Daochang Liu, Chang Xu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 1416-1424. https://doi.org/10.24963/ijcai.2024/157

Diffusion probabilistic models (DPMs) have been shown to generate high-quality images without the need for delicate adversarial training. The sampling process of DPMs is mathematically similar to Stochastic Gradient Descent (SGD), with both being iteratively updated by a function increment. Building on this observation, we present a novel reverse sampler for DPMs in this paper, drawing inspiration from the widely used Adam optimizer. Our proposed sampler can be readily applied to a pre-trained diffusion model, utilizing momentum mechanisms and adaptive updating to enhance the quality of generated images. By effectively reusing update directions from early steps, our proposed sampler achieves a better balance between high-level semantics and low-level details. Additionally, this sampler is flexible and can be easily integrated into pre-trained DPMs regardless of the sampler used during training. Our experimental results on multiple benchmarks demonstrate that our proposed reverse sampler yields remarkable improvements over different baselines.
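The core idea can be illustrated with a minimal sketch. The snippet below applies an Adam-style update (first-moment momentum plus element-wise second-moment rescaling) to the per-step update direction of a generic iterative sampler. All names here (`adam_momentum_sampler`, `increment_fn`, the hyperparameter values) are illustrative assumptions, not the paper's actual algorithm: `increment_fn` stands in for one reverse-diffusion increment produced by a pre-trained DPM, and the exact update rule and schedule used by the authors may differ.

```python
import numpy as np

def adam_momentum_sampler(x, increment_fn, num_steps,
                          beta1=0.9, beta2=0.999, eps=1e-8):
    """Illustrative Adam-style sampling loop (not the paper's exact method).

    Instead of applying each step's raw update direction directly,
    smooth it with a momentum (first-moment) estimate and rescale it
    element-wise by a second-moment estimate, as in the Adam optimizer.
    This is how update directions from early steps get reused.
    """
    m = np.zeros_like(x)  # first moment: running average of directions
    v = np.zeros_like(x)  # second moment: per-element adaptive scaling
    for t in range(1, num_steps + 1):
        g = increment_fn(x, t)            # raw update direction at step t
        m = beta1 * m + (1 - beta1) * g   # momentum over past directions
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)        # bias correction, as in Adam
        v_hat = v / (1 - beta2**t)
        x = x + m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy usage: with a direction that points toward the origin, the iterate
# should move toward zero while remaining finite.
x0 = np.array([5.0, -5.0])
x_final = adam_momentum_sampler(x0, lambda x, t: -x, num_steps=3)
```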
Keywords:
Computer Vision: CV: Image and video synthesis and generation