Improving Topic Evaluation Using Conceptual Knowledge
Claudiu Cristian Musat, Julien Velcin, Stefan Trausan-Matu, Marian-Andrei Rizoiu

Abstract
The growing number of statistical topic models has led to a need to better evaluate their output. Traditional evaluation methods estimate a model's fit to unseen data, yet it has recently been shown that human judgment can differ greatly from these measures. Methods that better emulate human judgment are therefore needed. In this paper we present a system that computes the usefulness of individual topics from a given model on the basis of information drawn from an ontology, in this case WordNet. We regard a topic's utility as the ability to attribute a concept to it and, based on that concept, to separate words related to the topic from unrelated ones. In multiple experiments, covering several corpora and difficulty levels, we demonstrate the correlation between the automatic evaluation method and the answers received from human evaluators. By shifting the evaluation focus from a statistical to a conceptual one, we are able to detect which topics are conceptually meaningful and rank them accordingly.