LLM-based Multi-Level Knowledge Generation for Few-shot Knowledge Graph Completion

Qian Li, Zhuo Chen, Cheng Ji, Shiqi Jiang, Jianxin Li

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 2135-2143. https://doi.org/10.24963/ijcai.2024/236

Knowledge Graphs (KGs) are pivotal in many NLP applications but often suffer from incompleteness, especially due to the long-tail problem, where infrequent, unpopular relations drastically reduce completion performance. In this paper, we focus on Few-shot Knowledge Graph Completion (FKGC), a task that addresses these gaps in long-tail scenarios. Amid the rapid evolution of Large Language Models (LLMs), we propose a generation-based FKGC paradigm facilitated by LLM distillation. Our MuKDC framework employs multi-level knowledge distillation for few-shot KG completion, generating supplementary knowledge to mitigate data scarcity in few-shot settings. MuKDC comprises two primary components: Multi-level Knowledge Generation, which enriches the KG at multiple levels, and Consistency Assessment, which ensures the coherence and reliability of the generated knowledge. Notably, our method achieves state-of-the-art results on both FKGC and multi-modal FKGC benchmarks, significantly advancing KG completion and deepening the understanding and application of LLMs in structured knowledge generation and assessment.
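As context for the abstract, the sketch below illustrates the general pattern of generation-based few-shot KG completion described above: an LLM is prompted to propose candidate triples for a sparsely observed relation, and a second pass filters them with a consistency check. The `llm` callable, prompt wording, and helper names are hypothetical illustrations under assumed interfaces, not the authors' MuKDC implementation.

```python
# Hypothetical sketch of a generation-then-assessment loop for few-shot KG completion.
# `llm` is assumed to be any callable mapping a prompt string to a text completion;
# prompts and parsing are illustrative only, not the MuKDC method itself.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def generate_candidates(llm: Callable[[str], str], relation: str,
                        support: List[Triple], n: int = 5) -> List[Triple]:
    """Ask the LLM for new triples of a few-shot relation, conditioned on support triples."""
    examples = "\n".join(f"({h}, {relation}, {t})" for h, _, t in support)
    prompt = (f"Relation: {relation}\nKnown triples:\n{examples}\n"
              f"Propose {n} new plausible triples, one per line, as (head, {relation}, tail).")
    candidates = []
    for line in llm(prompt).strip().splitlines():
        parts = [p.strip(" ()") for p in line.split(",")]
        if len(parts) == 3:
            candidates.append((parts[0], parts[1], parts[2]))
    return candidates

def consistent(llm: Callable[[str], str], triple: Triple, kg: List[Triple]) -> bool:
    """Second LLM pass: check a generated triple against a sample of existing KG facts."""
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in kg[:20])
    prompt = (f"Existing facts:\n{context}\n"
              f"Is the triple ({triple[0]}, {triple[1]}, {triple[2]}) consistent with them? "
              "Answer yes or no.")
    return llm(prompt).strip().lower().startswith("yes")

def complete_relation(llm: Callable[[str], str], relation: str,
                      support: List[Triple], kg: List[Triple]) -> List[Triple]:
    """Keep only generated triples that pass the consistency check."""
    return [c for c in generate_candidates(llm, relation, support)
            if consistent(llm, c, kg)]
```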
Keywords:
Data Mining: DM: Knowledge graphs and knowledge base completion
Natural Language Processing: NLP: Applications