Efficient Tuning and Inference for Large Language Models on Textual Graphs

Yun Zhu, Yaoke Wang, Haizhou Shi, Siliang Tang

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5734-5742. https://doi.org/10.24963/ijcai.2024/634

Rich textual and topological information in textual graphs needs to be modeled in real-world applications such as webpages, e-commerce, and academic articles. Practitioners have long followed the path of adopting a shallow text encoder followed by a graph neural network (GNN) to solve this problem. In light of recent advancements in large language models (LLMs), it is apparent that integrating LLMs for enhanced textual encoding can substantially improve performance on textual graphs. Nevertheless, the efficiency of these methods poses a significant challenge. In this paper, we propose ENGINE, a parameter- and memory-efficient fine-tuning method for textual graphs with an LLM encoder. The key insight is to combine the LLM and GNN through a tunable side structure, which significantly reduces training complexity without impairing the joint model's capacity. Extensive experiments on textual graphs demonstrate our method's effectiveness: it achieves the best model performance while incurring the lowest training cost compared to previous methods. Moreover, we introduce two variants, with caching and dynamic early exit, to further enhance training and inference speed. Specifically, caching accelerates ENGINE's training by 12x, and dynamic early exit achieves up to 5x faster inference with a negligible performance drop (at most a 1.17% relative drop across 7 datasets). Our code is available at: https://github.com/ZhuYun97/ENGINE.
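
The core idea stated in the abstract, attaching a small tunable side structure to a frozen LLM and propagating its cached hidden states over the graph, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, the per-layer gated fusion, and the dense-adjacency GCN are choices made here for brevity; see the linked repository for the actual code.

```python
# Hedged sketch, not the authors' implementation: a ladder-style tunable
# "side structure" that reads cached hidden states from a frozen LLM and
# refines them with small GNN layers. All names, dimensions, and the gated
# fusion scheme below are illustrative assumptions.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One graph convolution over a row-normalized dense adjacency matrix."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Aggregate neighbor features, then transform and apply a non-linearity.
        return F.relu(self.lin(adj_norm @ x))


class SideStructure(nn.Module):
    """Tunable side network: only these parameters are trained; the LLM stays
    frozen, so its per-layer node embeddings can be precomputed and cached."""

    def __init__(self, llm_dim: int, side_dim: int, num_llm_layers: int, num_classes: int):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(llm_dim, side_dim) for _ in range(num_llm_layers)])
        self.gnns = nn.ModuleList([SimpleGCNLayer(side_dim) for _ in range(num_llm_layers)])
        self.gate = nn.Parameter(torch.zeros(num_llm_layers))  # learned per-layer fusion weight
        self.head = nn.Linear(side_dim, num_classes)

    def forward(self, cached_states: List[torch.Tensor], adj_norm: torch.Tensor) -> torch.Tensor:
        # cached_states[l]: [num_nodes, llm_dim] node embeddings from frozen LLM layer l,
        # computed once and reused across epochs (the "caching" variant).
        num_nodes = cached_states[0].size(0)
        h = torch.zeros(num_nodes, self.head.in_features, device=adj_norm.device)
        for l, x in enumerate(cached_states):
            g = torch.sigmoid(self.gate[l])
            h = g * h + (1.0 - g) * self.proj[l](x)  # fuse side state with LLM layer l
            h = self.gnns[l](h, adj_norm)            # propagate over the textual graph
        return self.head(h)                          # node-level logits
```

Because such a side network consumes one frozen LLM layer per iteration, a dynamic-early-exit variant can, at inference time, stop the loop once intermediate predictions are sufficiently confident rather than always traversing every layer, which is consistent with the up-to-5x inference speedup reported in the abstract.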
Keywords:
Machine Learning: ML: Sequence and graph learning
Data Mining: DM: Mining graphs
Natural Language Processing: NLP: Language models
Machine Learning: ML: Supervised Learning