Performance as a Constraint: An Improved Wisdom of Crowds Using Performance Regularization
Jiyi Li, Yasushi Kawase, Yukino Baba, Hisashi Kashima
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 1534-1541.
https://doi.org/10.24963/ijcai.2020/213
Quality assurance is one of the most important problems in crowdsourcing and human computation, and it has been studied extensively from various aspects. Typical approaches include unsupervised ones, such as introducing task redundancy (i.e., asking the same question to multiple workers and aggregating their answers), and supervised ones, such as using worker performance on past tasks or injecting qualification questions into tasks to estimate worker performance. In this paper, we propose to utilize the worker performance as a global constraint for inferring the true answers; existing semi-supervised approaches do not consider such a use of qualification questions. We also propose to utilize the constraint as a regularizer combined with existing statistical aggregation methods. Experiments on heterogeneous multiple-choice questions demonstrate that the performance constraint not only has the power to estimate the ground truths when used by itself, but also boosts existing aggregation methods when used as a regularizer.
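To illustrate the general idea, here is a minimal sketch of performance-regularized answer aggregation. This is a hypothetical toy scheme, not the paper's actual formulation: it repeats a weighted vote, and the hypothetical regularization step down-weights workers whose agreement rate with the current aggregate deviates from their accuracy measured on qualification questions (`known_acc`, `lam`, and `performance_regularized_vote` are all illustrative names, not from the paper).

```python
import numpy as np

def performance_regularized_vote(answers, known_acc, n_choices, lam=1.0, n_iter=20):
    """Toy performance-regularized aggregation for multiple-choice answers.

    answers:   (n_workers, n_tasks) int array; answers[i, t] is worker i's choice.
    known_acc: (n_workers,) accuracy measured on qualification questions.
    Hypothetical scheme: alternate between a weighted vote and a reweighting
    step that penalizes the gap between each worker's agreement rate with
    the current aggregate and that worker's known accuracy.
    """
    n_workers, n_tasks = answers.shape
    w = np.ones(n_workers)  # worker weights, start uniform
    for _ in range(n_iter):
        # Weighted vote: accumulate each worker's weight on their chosen option.
        votes = np.zeros((n_tasks, n_choices))
        for i in range(n_workers):
            votes[np.arange(n_tasks), answers[i]] += w[i]
        labels = votes.argmax(axis=1)
        # Each worker's agreement rate with the current aggregate.
        agree = (answers == labels).mean(axis=1)
        # Regularization: shrink weights of workers whose observed agreement
        # deviates from their qualification-question accuracy.
        w = np.exp(-lam * (agree - known_acc) ** 2) * np.maximum(known_acc, 1e-6)
    return labels
```

For example, with two accurate workers and one adversarial worker, the vote recovers the accurate workers' labels while the regularizer keeps the adversarial worker's influence small.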
Keywords:
Humans and AI: Human Computation and Crowdsourcing