
Tuesday, November 8, 2016

Truth Serums for Massively Crowdsourced Evaluation Tasks. (arXiv:1507.07045v3 [cs.GT] UPDATED)

A major challenge in crowdsourcing evaluation tasks, such as labeling objects or grading assignments in online courses, is eliciting truthful responses from agents in the absence of verifiability. In this paper, we propose new reward mechanisms for such settings that, unlike many previously studied mechanisms, impose minimal assumptions on the structure and knowledge of the underlying generating model, can account for heterogeneity in the agents' abilities, require no extraneous elicitation from them, and furthermore allow their beliefs to be (almost) arbitrary. These mechanisms have the simple and intuitive structure of an output agreement mechanism: an agent gets a reward if her evaluation matches that of her peer. Unlike the classic output agreement mechanism, however, the reward is not the same across evaluations; it is inversely proportional to an appropriately defined popularity index of each evaluation. The popularity indices are computed by leveraging the existence of a large number of similar tasks, which is a typical characteristic of these settings. Experiments performed on MTurk workers demonstrate higher efficacy (with a $p$-value of $0.02$) of these mechanisms in inducing truthful behavior compared to the state of the art.
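To make the reward structure concrete, here is a minimal sketch in Python of the idea described in the abstract: pay an agent only when her evaluation agrees with a peer's, scaled inversely with how popular that evaluation is across a large batch of similar tasks. The function names (`popularity_index`, `reward`) and the use of raw empirical frequency as the popularity index are illustrative assumptions; the paper's exact definition may differ.

```python
from collections import Counter


def popularity_index(evaluation, all_evaluations):
    """Illustrative popularity index: the empirical frequency of an
    evaluation across a batch of similar tasks (an assumption; the
    paper may define this quantity differently)."""
    counts = Counter(all_evaluations)
    return counts[evaluation] / len(all_evaluations)


def reward(agent_eval, peer_eval, all_evaluations):
    """Output-agreement-style reward: paid only on a match with the
    peer, and inversely proportional to the popularity of the
    matched evaluation."""
    if agent_eval != peer_eval:
        return 0.0
    return 1.0 / popularity_index(agent_eval, all_evaluations)


# Example: 'cat' is a common label and 'lynx' a rare one, so agreeing
# on the rare label earns a larger reward under this scheme.
batch = ['cat', 'cat', 'cat', 'lynx', 'cat', 'dog', 'cat', 'lynx']
print(reward('cat', 'cat', batch))    # smaller reward for the popular label
print(reward('lynx', 'lynx', batch))  # larger reward for the rarer label
```

The intuition is that agreement on a rare evaluation is stronger evidence of truthful reporting than agreement on a common one, so weighting rewards inversely by popularity discourages herding on the obvious answer.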



from cs.AI updates on arXiv.org http://arxiv.org/abs/1507.07045
via IFTTT
