Image-embodied Knowledge Representation Learning

Ruobing Xie, Zhiyuan Liu, Huanbo Luan, Maosong Sun

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3140-3146. https://doi.org/10.24963/ijcai.2017/438

Entity images can provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring the rich visual information available in entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), in which knowledge representations are learned from both triple facts and images. More specifically, we first construct a representation for each image of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based entity representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates both the significance of visual information for knowledge representations and the capability of our models to learn knowledge representations with images.
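The pipeline described above, image encoding followed by attention-based aggregation against the entity's structure-based embedding, can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: it assumes image features come from a pretrained CNN encoder, uses a hypothetical projection matrix `proj` to map image features into the entity embedding space, and scores triples with a TransE-style L1 energy; all names and dimensions are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def aggregate_images(image_features, proj, entity_struct):
    """Attention-based aggregation of an entity's image representations.

    image_features : (n_images, d_img) CNN features, one row per image
    proj           : (d_img, d_ent) hypothetical projection into entity space
    entity_struct  : (d_ent,) structure-based embedding of the entity

    Returns the aggregated image-based entity representation (d_ent,).
    """
    # Project each image feature into the entity embedding space.
    img_reprs = image_features @ proj                 # (n_images, d_ent)
    # Attention: images whose projected representation agrees with the
    # structure-based embedding receive higher weight.
    weights = softmax(img_reprs @ entity_struct)      # (n_images,)
    return weights @ img_reprs                        # (d_ent,)

def energy(h, r, t):
    # TransE-style L1 energy: lower is better for a true triple (h, r, t).
    return np.linalg.norm(h + r - t, ord=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_img, d_ent, n_images = 4096, 50, 5              # illustrative sizes
    feats = rng.normal(size=(n_images, d_img))        # stand-in CNN features
    W = rng.normal(size=(d_img, d_ent)) * 0.01
    e_struct = rng.normal(size=d_ent)
    e_image = aggregate_images(feats, W, e_struct)
    r = rng.normal(size=d_ent)
    t = rng.normal(size=d_ent)
    print(energy(e_image, r, t))                      # image-based head vs. tail
```

In the full IKRL model, structure-based and image-based representations of head and tail are trained jointly, combining energies over their pairings (structure-structure, structure-image, image-structure, image-image); the sketch shows only the aggregation step and one such energy term.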
Keywords:
Machine Learning: Data Mining
Natural Language Processing: Information Extraction
Natural Language Processing: Natural Language Semantics