Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to invest. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that supply high-quality data are rewarded, while those that do not are discouraged by low rewards.
We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets, peer prediction, the Bayesian Truth Serum, the Peer Truth Serum, and Correlated Agreement, and identify the settings in which each is suitable. As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.
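To make the core idea of incentives for truthful reporting concrete, here is a minimal sketch of one mechanism named above, the quadratic (Brier) proper scoring rule. The function names and the two-outcome example are illustrative, not taken from the text; the defining property shown is that an agent maximizes its expected score by reporting its true belief.

```python
# Illustrative sketch: the quadratic (Brier) scoring rule, a classic
# proper scoring rule. All names and numbers here are hypothetical.

def quadratic_score(report, outcome):
    """S(p, i) = 2*p_i - sum_j p_j^2 when outcome i is observed."""
    return 2 * report[outcome] - sum(p * p for p in report)

def expected_score(report, belief):
    """Expected score of reporting `report` under true belief `belief`."""
    return sum(q * quadratic_score(report, i) for i, q in enumerate(belief))

belief = [0.7, 0.3]  # agent's true probabilities for two outcomes
truthful = expected_score(belief, belief)
misreport = expected_score([0.9, 0.1], belief)
assert truthful > misreport  # properness: truth-telling maximizes expected score
```

Because the expected score equals 2 p·q - ||p||^2, which is maximized at p = q, any misreport strictly lowers the agent's expected reward; this verifiability-free incentive property is what the later chapters generalize to settings without ground truth.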
Table of Contents
Chapter 1. Introduction
Chapter 2. Mechanisms for Verifiable Information
Chapter 3. Parametric Mechanisms for Unverifiable Information
Chapter 4. Nonparametric Mechanisms: Multiple Reports
Chapter 5. Nonparametric Mechanisms: Multiple Tasks
Chapter 6. Prediction Markets: Combining Elicitation and Aggregation
Chapter 7. Agents Motivated by Influence
Chapter 8. Decentralized Machine Learning
Chapter 9. Conclusions