Analogy-preserving functions: A way to extend Boolean samples

Miguel Couceiro, Nicolas Hug, Henri Prade, Gilles Richard

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1575-1581. https://doi.org/10.24963/ijcai.2017/218

Training set extension is an important issue in machine learning. Indeed, when only a limited number of examples is available, the performance of standard classifiers may decrease significantly, and it can be helpful to build additional examples. In this paper, we consider the use of analogical reasoning, and more particularly of analogical proportions, for extending training sets. Here the ground-truth labels are assumed to be given by a (partially known) function. We examine the conditions required for such functions to ensure an error-free extension in a Boolean setting. To this end, we introduce the notion of Analogy-Preserving (AP) functions, and we prove that their class is exactly the class of affine Boolean functions. This noteworthy theoretical result is complemented with an empirical investigation of approximate AP functions, which suggests that they remain suitable for training set extension.
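To make the setting concrete, below is a minimal sketch (in Python, not taken from the paper) of training-set extension by Boolean analogical proportions. It uses the standard minimal model of the proportion a:b::c:d, whose valid patterns are 0:0::0:0, 1:1::1:1, 0:0::1:1, 1:1::0:0, 0:1::0:1 and 1:0::1:0, together with a hypothetical affine target function `affine_f`. The names `extend`, `solve` and `in_proportion` are illustrative, and the exhaustive triple search is only meant for tiny Boolean samples.

```python
from itertools import product

# Boolean analogical proportion a:b::c:d ("a is to b as c is to d"):
# in the minimal model it holds exactly when a - b = c - d.
def in_proportion(a, b, c, d):
    return (a - b) == (c - d)

# Componentwise proportion between Boolean vectors.
def vectors_in_proportion(x, y, z, t):
    return all(in_proportion(a, b, c, d) for a, b, c, d in zip(x, y, z, t))

# Solve the label equation a:b::c:? ; a solution exists iff a = b or a = c.
def solve(a, b, c):
    if a == b:
        return c
    if a == c:
        return b
    return None  # equation not solvable, no label inferred

# Hypothetical affine target function f(x) = x1 XOR x3 XOR 1 (assumed ground truth).
def affine_f(x):
    return x[0] ^ x[2] ^ 1

def extend(training_set):
    """Infer labels for unseen items t that are in componentwise proportion
    with some triple (x, y, z) of training items whose label equation is solvable."""
    n = len(next(iter(training_set)))
    inferred = {}
    for t in product((0, 1), repeat=n):
        if t in training_set:
            continue
        for x in training_set:
            for y in training_set:
                for z in training_set:
                    if vectors_in_proportion(x, y, z, t):
                        label = solve(training_set[x], training_set[y], training_set[z])
                        if label is not None:
                            inferred[t] = label
    return inferred

if __name__ == "__main__":
    # Small partial sample labelled by the affine function above.
    sample = {x: affine_f(x) for x in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]}
    new_items = extend(sample)
    # Since f is affine (hence analogy preserving), every inferred label is correct.
    assert all(label == affine_f(t) for t, label in new_items.items())
    print(f"extended {len(sample)} examples with {len(new_items)} error-free ones")
```

In this sketch, the affine choice of target function is what guarantees an error-free extension: whenever four vectors are in componentwise proportion and the label equation is solvable, the inferred label coincides with the function's value on the new item. With a non-affine target, the same procedure may produce wrong labels.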
Keywords:
Machine Learning: Classification
Machine Learning: Machine Learning
Machine Learning: New Problems