What Makes Models Compositional? A Theoretical View
Parikshit Ram, Tim Klinger, Alexander G. Gray
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4824-4832.
https://doi.org/10.24963/ijcai.2024/533
Compositionality is thought to be a key component of language, and various compositional benchmarks have been developed to empirically probe the compositional generalization of existing sequence-processing models. These benchmarks often highlight failures of existing models, but it is not clear why the models fail in this way. In this paper, we seek to theoretically understand the role the compositional structure of a model plays in these failures, and how this structure relates to the model's expressivity and sample complexity. We propose a general neuro-symbolic definition of compositional functions and their compositional complexity. We then show how various existing general-purpose and special-purpose sequence-processing models (such as recurrent, convolutional, and attention-based ones) fit this definition, and use it to analyze their compositional complexity. Finally, we provide theoretical guarantees for the expressivity and systematic generalization of compositional models that explicitly depend on our proposed definition, and we highlight factors that drive poor empirical performance.
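To make the compositional view of sequence models concrete, the following is a minimal illustrative sketch, not the paper's formalism: it assumes the standard reading of a recurrent model as the repeated composition of a single shared component function, and all names (cell, recurrent_composition, etc.) are hypothetical.

```python
from typing import Callable, Sequence, TypeVar

H = TypeVar("H")  # hidden/representation space
X = TypeVar("X")  # input-token space

def recurrent_composition(
    cell: Callable[[H, X], H],  # shared component function, reused at every step
    init: H,                    # initial hidden state
    tokens: Sequence[X],        # input sequence x_1, ..., x_T
) -> H:
    """Read a recurrent model as the T-fold composition
    cell(., x_T) o ... o cell(., x_1) applied to init:
    one primitive function, composed T times."""
    h = init
    for x in tokens:
        h = cell(h, x)
    return h

# Toy usage: a "cell" that accumulates token lengths.
if __name__ == "__main__":
    out = recurrent_composition(lambda h, x: h + len(x), 0, ["jump", "twice"])
    print(out)  # 9
```

Under this reading, the number of distinct component functions and the depth of their composition give a rough intuition for the kind of compositional complexity the paper analyzes; convolutional and attention-based models would decompose into different, but analogously structured, compositions.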
Keywords:
Machine Learning: ML: Neuro-symbolic methods
Machine Learning: ML: Learning theory
Machine Learning: ML: Theory of deep learning
Natural Language Processing: NLP: Interpretability and analysis of models for NLP