Dynamic Brightness Adaptation for Robust Multi-modal Image Fusion


Yiming Sun, Bing Cao, Pengfei Zhu, Qinghua Hu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 1317-1325. https://doi.org/10.24963/ijcai.2024/146

Infrared and visible image fusion aims to integrate the strengths of both modalities into visually enhanced, informative images. Visible imaging in real-world scenarios is susceptible to dynamic environmental brightness fluctuations, leading to texture degradation. Existing fusion methods lack robustness against such brightness perturbations, significantly compromising the visual fidelity of the fused imagery. To address this challenge, we propose the Brightness Adaptive multimodal dynamic fusion framework (BA-Fusion), which achieves robust image fusion despite dynamic brightness fluctuations. Specifically, we introduce a Brightness Adaptive Gate (BAG) module, which dynamically selects features from brightness-related channels for normalization while preserving brightness-independent structural information within the source images. Furthermore, we propose a brightness consistency loss function to optimize the BAG module. The entire framework is optimized via an alternating training strategy. Extensive experiments validate that our method surpasses state-of-the-art methods in preserving multi-modal image information and visual fidelity, while exhibiting remarkable robustness across varying brightness levels. Our code is available at https://github.com/SunYM2020/BA-Fusion.
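
To make the gating idea concrete, the following is a minimal PyTorch-style sketch of a brightness-adaptive channel gate: a learned per-channel score decides how strongly each channel is routed through instance normalization (to suppress brightness statistics) versus passed through unchanged (to keep brightness-independent structure). All names, shapes, and layer choices here (BrightnessAdaptiveGate, the pooling-MLP scorer, InstanceNorm2d) are illustrative assumptions, not the authors' implementation; refer to the linked repository for the official BAG module.

# Hypothetical sketch, not the authors' code: soft channel-wise selection between
# normalized (brightness-related) and untouched (brightness-independent) features.
import torch
import torch.nn as nn


class BrightnessAdaptiveGate(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Global pooling + small MLP scores each channel's brightness sensitivity.
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(num_channels, num_channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_channels // 4, num_channels, kernel_size=1),
            nn.Sigmoid(),  # gate in (0, 1): 1 -> normalize, 0 -> keep as-is
        )
        # Instance norm removes per-image brightness statistics from gated channels.
        self.norm = nn.InstanceNorm2d(num_channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.score(x)                       # (B, C, 1, 1) channel gates
        return g * self.norm(x) + (1 - g) * x   # soft channel-wise selection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)        # dummy visible-branch features
    gated = BrightnessAdaptiveGate(64)(feats)
    print(gated.shape)                          # torch.Size([2, 64, 128, 128])

A brightness consistency loss in this spirit could, for example, penalize the distance between fused outputs produced from the original and a brightness-perturbed visible input; the authors' exact formulation is given in the paper and repository.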
Keywords:
Computer Vision: CV: Applications
Computer Vision: CV: Multimodal learning