Training Group Orthogonal Neural Networks with Privileged Information
Yunpeng Chen, Xiaojie Jin, Jiashi Feng, Shuicheng Yan
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1532-1538.
https://doi.org/10.24963/ijcai.2017/212
Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs). In this paper, we consider how to use privileged information to promote the inherent diversity of a single CNN model so that the model can learn better representations and offer stronger generalization ability. To this end, we propose a novel group orthogonal convolutional neural network (GoCNN) that learns untangled representations within each layer by exploiting the provided privileged information, thereby effectively enhancing representation diversity. We take image classification as an example, where image segmentation annotations are used as privileged information during training. Experiments on two benchmark datasets – ImageNet and PASCAL VOC – clearly demonstrate the strong generalization ability of the proposed GoCNN model. On the ImageNet dataset, GoCNN improves the performance of the state-of-the-art ResNet-152 model by an absolute margin of 1.2% while using privileged information for only 10% of the training images, confirming the effectiveness of GoCNN in utilizing available privileged knowledge to train better CNNs.
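To make the idea concrete, below is a minimal PyTorch sketch of one way the group split and the privileged segmentation supervision described in the abstract could be wired up. The class name, the channel split ratio, and the suppression loss are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch: split the final conv features into a "foreground" group and a
# "background" group, and use a segmentation mask (the privileged information)
# to suppress each group's responses in the region it should not attend to.
# GoCNNHead, fg_ratio, and suppression weighting below are hypothetical choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoCNNHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int, fg_ratio: float = 0.75):
        super().__init__()
        self.fg_channels = int(in_channels * fg_ratio)      # foreground group
        self.bg_channels = in_channels - self.fg_channels   # background group
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, features, mask=None):
        # features: (N, C, H, W) conv feature map; mask: (N, 1, H, W) in {0, 1},
        # where 1 marks foreground pixels (privileged segmentation annotation).
        fg_feat, bg_feat = torch.split(
            features, [self.fg_channels, self.bg_channels], dim=1)

        # Classification still uses all channels via global average pooling.
        pooled = F.adaptive_avg_pool2d(features, 1).flatten(1)
        logits = self.fc(pooled)

        # Suppression terms: foreground channels should respond near zero on the
        # background region, and background channels near zero on the foreground.
        suppress = features.new_zeros(())
        if mask is not None:
            mask = F.interpolate(mask, size=features.shape[-2:], mode="nearest")
            suppress = ((fg_feat * (1 - mask)) ** 2).mean() \
                     + ((bg_feat * mask) ** 2).mean()
        return logits, suppress

# Usage sketch: only images with segmentation masks contribute the suppression
# term, which mirrors training with privileged information on a subset of data.
head = GoCNNHead(in_channels=2048, num_classes=1000)
feats = torch.randn(4, 2048, 7, 7)                     # e.g. ResNet final features
mask = (torch.rand(4, 1, 224, 224) > 0.5).float()      # dummy segmentation mask
logits, suppress = head(feats, mask)
loss = F.cross_entropy(logits, torch.randint(0, 1000, (4,))) + 0.1 * suppress
```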
Keywords:
Machine Learning: Deep Learning
Robotics and Vision: Vision and Perception