Representation Learning for Scene Graph Completion via Jointly Structural and Visual Embedding
Hai Wan, Yonghao Luo, Bo Peng, Wei-Shi Zheng
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 949-956.
https://doi.org/10.24963/ijcai.2018/132
This paper focuses on scene graph completion, which aims at predicting new relations between two entities using existing scene graphs and images. By comparison with the well-known knowledge graph, we first identify that each scene graph is associated with an image, and that each entity of a visual triple in a scene graph is composed of its entity type with attributes and is grounded with a bounding box in the corresponding image. We then propose an end-to-end model named Representation Learning via Jointly Structural and Visual Embedding (RLSV) to take advantage of both the structural and the visual information in scene graphs. In the RLSV model, we provide a fully convolutional module to extract the visual embeddings of a visual triple and apply hierarchical projection to combine the structural and visual embeddings of a visual triple. In experiments, we evaluate our model on two scene graph completion tasks, link prediction and visual triple classification, and further analyze it through case studies. Experimental results demonstrate that our model outperforms all baselines on both tasks, which justifies the significance of combining structural and visual information for scene graph completion.
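To make the idea of jointly embedding structural and visual information concrete, here is a minimal sketch of scoring a visual triple. It assumes a TransE-style translation score and simple linear projections into a joint space; the entity names, relation, projection matrices (`W_s`, `W_v`), and random visual features are all hypothetical stand-ins, not the paper's actual hierarchical projection or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding dimension (illustrative)

# Hypothetical structural embeddings for entities and a relation,
# plus visual features (e.g., pooled CNN features of bounding boxes).
struct = {"person": rng.normal(size=D), "horse": rng.normal(size=D)}
visual = {"person": rng.normal(size=D), "horse": rng.normal(size=D)}
rel = {"riding": rng.normal(size=D)}

# Hypothetical projection matrices mapping structural and visual
# embeddings into a joint space (a simplified stand-in for the
# paper's hierarchical projection, whose exact form is not shown here).
W_s = rng.normal(size=(D, D)) / np.sqrt(D)
W_v = rng.normal(size=(D, D)) / np.sqrt(D)

def joint_embed(name):
    """Combine an entity's structural and visual embeddings."""
    return W_s @ struct[name] + W_v @ visual[name]

def score(head, relation, tail):
    """TransE-style plausibility: higher (closer to 0) = more plausible."""
    return -np.linalg.norm(joint_embed(head) + rel[relation] - joint_embed(tail))

print(score("person", "riding", "horse"))
```

In a trained model, link prediction would rank candidate tails (or heads) by this score; here the score is meaningless because nothing is learned, but it shows how the two embedding sources feed one triple-level objective.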
Keywords:
Knowledge Representation and Reasoning: Knowledge Representation Languages
Computer Vision: Structural and Model-Based Approaches, Knowledge Representation and Reasoning