Strategyproof Classification with Shared Inputs

Reshef Meir, Ariel D. Procaccia, Jeffrey S. Rosenschein

Abstract
Strategyproof classification deals with a setting where a decision-maker must classify a set of input points with binary labels, while minimizing the expected error. The labels of the input points are reported by self-interested agents, who might lie in order to obtain a classifier that more closely matches their own labels, thus creating a bias in the data; this motivates the design of truthful mechanisms that discourage false reports. Previous work [Meir et al., 2008] investigated both decision-theoretic and learning-theoretic variations of the setting, but only considered classifiers that belong to a degenerate class. In this paper we assume that the agents are interested in a shared set of input points. We show that this plausible assumption leads to powerful results. In particular, we demonstrate that variations of a truthful random dictator mechanism can guarantee approximately optimal outcomes with respect to any class of classifiers.
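To make the random dictator idea concrete, the following is a minimal sketch, not the paper's mechanism verbatim: one agent is chosen uniformly at random, and the mechanism outputs the classifier from the given class that best fits that agent's reported labels on the shared input points. All names here (labels, classifiers, empirical_risk, random_dictator) are hypothetical illustrations introduced for this example.

import random

def empirical_risk(classifier, agent_labels, points):
    # Fraction of the agent's reported labels that the classifier gets wrong.
    errors = sum(1 for j in points if classifier(j) != agent_labels[j])
    return errors / len(points)

def random_dictator(labels, classifiers, points, rng=random):
    # Pick one agent uniformly at random and return the classifier in the
    # class that minimizes empirical risk on that agent's reported labels.
    # The outcome depends only on the dictator's report, so no other agent
    # can gain by lying, and the dictator's best strategy is to report
    # truthfully -- the source of strategyproofness.
    dictator = rng.choice(range(len(labels)))
    return min(classifiers,
               key=lambda c: empirical_risk(c, labels[dictator], points))

if __name__ == "__main__":
    # Three agents label four shared points; the class contains two
    # threshold-style classifiers over point indices (hypothetical data).
    labels = [
        {0: 1, 1: 1, 2: 0, 3: 0},  # agent 0
        {0: 1, 1: 0, 2: 0, 3: 0},  # agent 1
        {0: 1, 1: 1, 2: 1, 3: 0},  # agent 2
    ]
    classifiers = [lambda j: 1 if j < 2 else 0,   # positive on first two points
                   lambda j: 1 if j < 3 else 0]   # positive on first three
    points = range(4)
    chosen = random_dictator(labels, classifiers, points)
    print([chosen(j) for j in points])

Because every agent evaluates outcomes only through their own labels on the same shared points, selecting a dictator at random keeps the mechanism truthful while, as the paper argues, still yielding approximately optimal expected error with respect to any class of classifiers.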