Siamese CNN-BiLSTM Architecture for 3D Shape Representation Learning

Guoxian Dai, Jin Xie, Yi Fang

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 670-676. https://doi.org/10.24963/ijcai.2018/93

Learning a 3D shape representation from a collection of its rendered 2D images has been extensively studied. However, existing view-based techniques have not yet fully exploited the information shared among all the projected views. In this paper, by employing a recurrent neural network to efficiently capture features across different views, we propose a siamese CNN-BiLSTM network for 3D shape representation learning. The proposed method minimizes a discriminative loss function to learn a deep nonlinear transformation that maps 3D shapes from the original space into a nonlinear feature space. In the transformed space, the distance between 3D shapes with the same label is minimized, while the distance between shapes with different labels is pushed beyond a large margin. Specifically, each 3D shape is first projected into a group of 2D images from different views. Then a convolutional neural network (CNN) extracts features from each view image, followed by a bidirectional long short-term memory (BiLSTM) network that aggregates information across the views. Finally, we arrange the whole CNN-BiLSTM network into a siamese structure trained with a contrastive loss function. Our proposed method is evaluated on two benchmarks, ModelNet40 and SHREC 2014, demonstrating superior performance over state-of-the-art methods.
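To make the pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a shared per-view CNN, a BiLSTM that aggregates features over the view sequence, and a contrastive loss applied to siamese pairs. The layer sizes, view count, image resolution, and mean-pooling aggregation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBiLSTM(nn.Module):
    """Per-view CNN feature extractor followed by a BiLSTM over the view sequence."""
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Hypothetical small per-view CNN; the paper uses a deeper backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim),
        )
        self.bilstm = nn.LSTM(feat_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, views):               # views: (B, V, 1, H, W)
        b, v = views.shape[:2]
        x = self.cnn(views.flatten(0, 1))   # (B*V, feat_dim): shared CNN per view
        x = x.view(b, v, -1)                # (B, V, feat_dim): restore view sequence
        out, _ = self.bilstm(x)             # (B, V, 2*hidden_dim): bidirectional pass
        return out.mean(dim=1)              # aggregate across views (assumed pooling)

def contrastive_loss(f1, f2, same_label, margin=1.0):
    """Pull same-label pairs together; push different-label pairs beyond the margin."""
    d = F.pairwise_distance(f1, f2)
    pos = same_label * d.pow(2)
    neg = (1 - same_label) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# Siamese usage: one shared network embeds both shapes of each pair.
net = CNNBiLSTM()
a = torch.randn(8, 12, 1, 64, 64)      # 8 shapes x 12 rendered views each
b = torch.randn(8, 12, 1, 64, 64)
y = torch.randint(0, 2, (8,)).float()  # 1 = same label, 0 = different
loss = contrastive_loss(net(a), net(b), y)
```

Because the two branches share weights, only one network is instantiated and applied to both shapes in a pair, which is what makes the structure siamese.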
Keywords:
Computer Vision: 2D and 3D Computer Vision
Computer Vision: Computer Vision