Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities
Geewook Kim, Akifumi Okuno, Kazuki Fukui, Hidetoshi Shimodaira
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5031-5038.
https://doi.org/10.24963/ijcai.2019/699
We propose weighted inner product similarity (WIPS) for neural network-based graph embedding. In addition to the parameters of the neural networks, we optimize the weights of the inner product, allowing both positive and negative values. Despite its simplicity, WIPS can approximate arbitrary general similarities, including positive definite, conditionally positive definite, and indefinite kernels. WIPS is free from similarity model selection, since it can learn similarity models such as cosine similarity, negative Poincaré distance, and negative Wasserstein distance. Our experiments show that the proposed method learns high-quality distributed representations of nodes from real datasets, leading to an accurate approximation of similarities as well as high performance in inductive tasks.
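The core idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes embedding vectors u, v produced by some encoder and a learned weight vector w whose entries may be negative, which is what lets the similarity go beyond positive definite kernels (a plain inner product corresponds to all weights equal to one).

```python
def wips_similarity(u, v, w):
    """Weighted inner product similarity: sum_i w_i * u_i * v_i.

    Unlike a plain inner product, the weights w may take negative
    values, so the resulting similarity can be indefinite.
    """
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))


# Toy illustration with hypothetical embedding vectors.
u = [1.0, 2.0]
v = [3.0, 4.0]

plain = wips_similarity(u, v, [1.0, 1.0])    # reduces to the ordinary dot product: 11.0
mixed = wips_similarity(u, v, [1.0, -1.0])   # mixed-sign weights: 3.0 - 8.0 = -5.0
print(plain, mixed)
```

In the paper's setting the weights are trained jointly with the neural network parameters; with all weights fixed to one the model reduces to ordinary inner product similarity, so the weighted form is a strict generalization.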
Keywords:
Natural Language Processing: Embeddings
Machine Learning: Dimensionality Reduction and Manifold Learning
Machine Learning Applications: Applications of Unsupervised Learning
Machine Learning Applications: Networks
Machine Learning: Unsupervised Learning
Machine Learning: Learning Theory