Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning

Jiewen Deng, Renhe Jiang, Jiaqi Zhang, Xuan Song

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 2018-2026. https://doi.org/10.24963/ijcai.2024/223

Multi-modality spatio-temporal (MoST) data extends spatio-temporal (ST) data by incorporating multiple modalities. Such data is prevalent in urban monitoring systems, for example those tracking diverse traffic demands and air quality. Despite significant strides in ST modeling in recent years, the information shared across modalities remains under-exploited. Robust MoST forecasting is more challenging because the data possesses (i) high-dimensional and complex internal structures and (ii) dynamic heterogeneity caused by temporal, spatial, and modality variations. In this study, we propose a novel MoST learning framework via Self-Supervised Learning, namely MoSSL, which aims to uncover latent patterns from temporal, spatial, and modality perspectives while quantifying dynamic heterogeneity. Experimental results on two real-world MoST datasets verify the superiority of our approach over state-of-the-art baselines. The model implementation is available at https://github.com/beginner-sketch/MoSSL.
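To make the data setting concrete, below is a minimal sketch of how MoST data could be represented and windowed for forecasting: a tensor indexed by time, spatial node, and modality, split into past/future pairs. All names, shapes, and the sliding-window helper are illustrative assumptions for exposition, not the paper's actual pipeline.

```python
# Illustrative sketch only: shapes and names are assumptions,
# not MoSSL's actual data-loading code.
import numpy as np

T, N, M = 2016, 200, 3  # time steps, spatial nodes, modalities
                        # (e.g., taxi, bike, air-quality readings)
most = np.random.rand(T, N, M).astype(np.float32)  # MoST tensor

def sliding_windows(data, in_len=12, out_len=12):
    """Split a (T, N, M) MoST tensor into (input, target) forecasting pairs."""
    xs, ys = [], []
    for t in range(data.shape[0] - in_len - out_len + 1):
        xs.append(data[t : t + in_len])                     # past observations
        ys.append(data[t + in_len : t + in_len + out_len])  # future horizon
    return np.stack(xs), np.stack(ys)

x, y = sliding_windows(most)
print(x.shape, y.shape)  # (1993, 12, 200, 3) (1993, 12, 200, 3)
```

A single-modality ST dataset corresponds to the special case M = 1; the extra modality axis is what gives rise to the modality-variation component of the dynamic heterogeneity discussed in the abstract.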
Keywords:
Data Mining: DM: Mining spatial and/or temporal data
Knowledge Representation and Reasoning: KRR: Qualitative, geometric, spatial, and temporal reasoning
Machine Learning: ML: Time series and data streams