Graph Contrastive Learning with Reinforcement Augmentation

Ziyang Liu, Chaokun Wang, Cheng Wu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 2225-2233. https://doi.org/10.24963/ijcai.2024/246

Graph contrastive learning (GCL), which designs contrastive objectives to learn embeddings from augmented graphs, has become a prevailing method for extracting graph embeddings in an unsupervised manner. As an important procedure in GCL, graph data augmentation (GDA) directly affects model performance on downstream tasks. Current GCL methods typically treat GDA steps as independent events, neglecting their continuity. In this paper, we regard GDA in GCL as a Markov decision process and propose a novel graph reinforcement augmentation framework for GCL. Based on this framework, we design a Graph Advantage Actor-Critic (GA2C) model. We conduct extensive experiments to evaluate GA2C on unsupervised learning, transfer learning, and semi-supervised learning. The experimental results demonstrate the performance superiority of GA2C over state-of-the-art GCL models. Furthermore, we verify that GA2C is more efficient than other GCL methods with learnable GDA, and we provide two examples of chemical molecular graphs from ZINC-2M to demonstrate that GA2C generates meaningful augmented views, where the edge weights reflect the importance of chemical bonds in the molecule.
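To make the framing concrete, the following is a minimal, illustrative sketch (not the authors' GA2C implementation) of treating graph data augmentation as a Markov decision process and updating an advantage actor-critic agent in PyTorch. The state, reward function (`contrastive_reward`), and per-edge keep/drop action space are assumptions introduced here for illustration only; in the paper the reward would come from the contrastive objective on the augmented views.

```python
# Hypothetical sketch: GDA as an MDP with an advantage actor-critic agent.
# The toy state (a vector of edge weights), the Bernoulli keep/drop actions,
# and contrastive_reward are placeholders, not the paper's actual design.
import torch
import torch.nn as nn

class EdgeActorCritic(nn.Module):
    """Actor proposes per-edge keep probabilities; critic values the state."""
    def __init__(self, num_edges, hidden=64):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(num_edges, hidden), nn.ReLU(),
                                   nn.Linear(hidden, num_edges), nn.Sigmoid())
        self.critic = nn.Sequential(nn.Linear(num_edges, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, edge_state):
        return self.actor(edge_state), self.critic(edge_state)

def contrastive_reward(edge_mask):
    # Placeholder: a real reward would be derived from the GCL loss
    # evaluated on the augmented view (assumption).
    return edge_mask.float().mean()

num_edges, gamma = 32, 0.99
model = EdgeActorCritic(num_edges)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.ones(num_edges)               # start with all edges kept
for step in range(5):                       # one short augmentation trajectory
    keep_prob, value = model(state)
    dist = torch.distributions.Bernoulli(probs=keep_prob)
    action = dist.sample()                  # per-edge keep/drop decisions
    next_state = state * action             # augmented view is the next state
    reward = contrastive_reward(next_state)
    with torch.no_grad():
        _, next_value = model(next_state)
    advantage = reward + gamma * next_value.squeeze() - value.squeeze()
    actor_loss = -dist.log_prob(action).sum() * advantage.detach()
    critic_loss = advantage.pow(2)
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    state = next_state.detach()
```

The point of the sketch is the continuity that the abstract emphasizes: each augmentation decision produces the next state of the trajectory, so the policy is trained over a sequence of dependent augmentation steps rather than independent, one-shot perturbations.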
Keywords:
Data Mining: DM: Mining graphs
Machine Learning: ML: Representation learning
Machine Learning: ML: Reinforcement learning
Machine Learning: ML: Self-supervised Learning