Sifting Common Information from Many Variables

Greg Ver Steeg, Shuyang Gao, Kyle Reing, Aram Galstyan

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2885-2892. https://doi.org/10.24963/ijcai.2017/402

Measuring the relationship between any pair of variables is a rich and active area of research that is central to scientific practice. In contrast, characterizing the common information among any group of variables is typically a theoretical exercise with few practical methods for high-dimensional data. A promising solution would be a multivariate generalization of the famous Wyner common information, but this approach relies on solving an apparently intractable optimization problem. We leverage the recently introduced information sieve decomposition to formulate an incremental version of the common information problem that admits a simple fixed point solution, fast convergence, and complexity that is linear in the number of variables. This scalable approach allows us to demonstrate the usefulness of common information in high-dimensional learning problems. The sieve outperforms standard methods on dimensionality reduction tasks, solves a blind source separation problem that cannot be solved with ICA, and accurately recovers structure in brain imaging data.
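For context, the Wyner common information mentioned in the abstract is standardly defined as the minimal information a latent variable must carry to render the observed variables conditionally independent; the sketch below gives the textbook two-variable definition and its usual multivariate generalization, not necessarily the exact objective optimized in the paper.

% Two-variable Wyner common information: the least I(X_1, X_2; W) over
% auxiliary variables W that make X_1 and X_2 conditionally independent.
\[
  C(X_1; X_2) \;=\; \min_{\substack{p(w \mid x_1, x_2):\; X_1 \perp X_2 \mid W}} I(X_1, X_2; W)
\]
% Multivariate generalization: W must render all of X_1, ..., X_n
% conditionally independent, and the optimization is apparently intractable in general.
\[
  C(X_1, \dots, X_n) \;=\; \min_{\substack{p(w \mid x_1, \dots, x_n):\; X_1, \dots, X_n \text{ cond. indep. given } W}} I(X_1, \dots, X_n; W)
\]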
Keywords:
Machine Learning: Unsupervised Learning
Uncertainty in AI: Uncertainty in AI