Some Effects of a Reduced Relational Vocabulary on the Whodunit Problem
Daniel T. Halstead, Kenneth D. Forbus
Abstract
A key issue in artificial intelligence is determining how much input detail is needed for successful learning. Too much detail incurs overhead and makes learning prone to overfitting; too little, and it may not be possible to learn anything at all. The issue is particularly relevant when the inputs are relational case descriptions, where a highly expressive vocabulary can also lead to inconsistent representations. For example, in the Whodunit Problem, the task is to form hypotheses about the identity of the perpetrator of an event described using relational propositions. The training data consists of arbitrary relational descriptions of many other similar cases. In this paper, we examine the possibility of translating the case descriptions into an alternative vocabulary that has fewer predicates and therefore yields more consistent case descriptions. We compare how the reduced vocabulary affects three different learning algorithms: exemplar-based analogy, prototype-based analogy, and association rule learning. We find that it has a positive effect on some algorithms and a negative effect on others, which gives us insight into all three algorithms and indicates when reduced vocabularies might be appropriate.