Examining Validity of MTurk Workers Responses Based on Monetary Reward
Date
2020
Authors
Murphy, Maggie
Murphy, Margret
Condon, David
Journal Title
Journal ISSN
Volume Title
Publisher
University of Oregon
Abstract
Amazon's Mechanical Turk (MTurk) is an online crowdsourcing marketplace (OCM) that has become widely used for data collection in scientific research, especially in the social sciences. In psychology research, a common use of the platform is to pay MTurk workers (aka "MTurkers") to complete surveys and online behavioral tasks. Workers are paid for their contributions; however, little research has considered the effect of payment on data quality (Chmielewski & Kucker, 2019). We hypothesize that the accuracy of responses is partially dependent on the amount MTurk workers are paid. In this study, we sought to evaluate the effect of compensation on the care that MTurkers displayed in their survey responses. We explore the validity of MTurk responses using an SPI norming survey created by Professor Condon, administered under three compensation conditions: one that compensated workers at a rate equal to the U.S. federal minimum wage, one paying minimum wage plus 25%, and a third paying 25% less than minimum wage with an unannounced bonus (up to minimum wage) after the work was completed. We compare responses based on the time spent completing the survey, inter-item correlations, and evidence of "patterned responding" (e.g., choosing the same response option for several questions in a row). The findings from our research will be beneficial to researchers using MTurk and other OCMs for data collection.
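One of the data-quality screens mentioned in the abstract, detecting "patterned responding," is commonly operationalized as a longstring index: the longest run of identical consecutive answers in a respondent's data. The sketch below is an illustration of that general technique, not the authors' actual analysis code; the function name, threshold, and example data are hypothetical.

```python
def longstring(responses):
    """Return the length of the longest run of identical consecutive responses.

    A large longstring value (e.g., the same option chosen many items in a
    row) is one possible flag for careless, patterned responding.
    """
    if not responses:
        return 0
    longest = current = 1
    for prev, curr in zip(responses, responses[1:]):
        if curr == prev:
            current += 1
            longest = max(longest, current)
        else:
            current = 1
    return longest

# Hypothetical respondent who picked option 3 for six items in a row:
answers = [2, 3, 3, 3, 3, 3, 3, 1, 4]
print(longstring(answers))  # -> 6
```

In practice, a researcher would compute this index per respondent and compare its distribution across the three compensation conditions, alongside completion time and inter-item correlations.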
Description
Project files consist of a one-page PDF and a presentation recording in MP4 format.
Keywords
Amazon's Mechanical Turk, Response Validity, Compensation, Personality