Neural PCA for Flow-Based Representation Learning
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3229-3235.
https://doi.org/10.24963/ijcai.2022/448
Of particular interest is the discovery of useful representations solely from observations, in an unsupervised generative manner. However, despite their strong performance in sample generation and density estimation, the question of whether existing normalizing flows provide effective representations for downstream tasks remains largely unanswered. This paper investigates this question for normalizing flows, a family of generative models that admits exact invertibility. We propose Neural Principal Component Analysis (Neural-PCA), which operates in full dimensionality while capturing principal components in descending order. Without exploiting any label information, Neural-PCA recovers principal components that store the most informative elements in their leading dimensions and relegate the negligible ones to the trailing dimensions, yielding clear performance improvements of 5%-10% on downstream tasks. These improvements are empirically consistent regardless of the number of trailing latent dimensions dropped. Our work suggests that appropriate inductive biases should be introduced into generative modeling when representation quality is of interest.
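To make the evaluation protocol the abstract describes concrete, the following is a minimal sketch (not the authors' code) of probing representation quality after dropping trailing latent dimensions: latents are truncated to their leading k dimensions and a linear probe is fit on them. The encoder `flow_encode` is a hypothetical stand-in for a trained Neural-PCA flow; here a fixed random rotation of toy data merely makes the sketch run end to end.

```python
# Minimal sketch, assuming a trained flow is available as `flow_encode`.
# A random orthogonal map stands in for the flow for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))              # toy observations
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # toy labels

Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # placeholder "flow"
def flow_encode(x):
    return x @ Q

Z = flow_encode(X)
for k in (64, 32, 8):                        # drop trailing latent dimensions
    Z_k = Z[:, :k]                           # keep only the leading k
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z_k, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    print(f"k={k:3d}  linear-probe accuracy={clf.score(Z_te, y_te):.3f}")
```

Under the paper's claim, a representation whose leading dimensions are ordered by informativeness would show little degradation in probe accuracy as k shrinks; the toy stand-in above does not exhibit this property and is only meant to show the mechanics of the probe.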
Keywords:
Machine Learning: Representation learning
Computer Vision: Neural generative models, autoencoders, GANs