Pay Rates and Subject Performance in Social Science Experiments Using Crowdsourced Online Samples

  • David J. Andersen and Richard R. Lau

Abstract

Mechanical Turk has become an important source of subjects for social science experiments, providing a low-cost alternative to the convenience of undergraduate samples while avoiding the expense of drawing fully representative samples. However, we know little about how the rates we pay to “Turkers” for participating in social science experiments affect their participation. This study examines subject performance in two experiments – a short survey experiment and a longer dynamic process tracing study of political campaigns – that recruited Turkers at different rates of pay. Looking at demographics, and using measures of attention, engagement, and evaluation of the candidates, we find no effects of pay rates on subject recruitment or participation. We conclude by discussing the implications and ethical standards of pay.
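To make the comparison concrete, here is a minimal sketch in Python of the kind of test the abstract describes: checking whether an engagement measure differs between two pay-rate conditions. This is not the authors' replication code; the condition labels, sample sizes, and score distributions are illustrative assumptions, with equal means standing in for the paper's null finding.

    # Hypothetical illustration only: simulated engagement scores for two
    # pay-rate conditions, compared with Welch's t-test. Not the authors' code.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Equal population means encode the reported null effect of pay rates.
    low_pay = rng.normal(loc=5.0, scale=1.5, size=200)   # e.g., a low-paying HIT
    high_pay = rng.normal(loc=5.0, scale=1.5, size=200)  # e.g., a higher-paying HIT

    # Welch's t-test does not assume equal variances across conditions.
    t_stat, p_value = stats.ttest_ind(low_pay, high_pay, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a large p is consistent with no pay effect

In a study of this design, a comparison like this would be run for each outcome (attention, engagement, candidate evaluations) and for recruitment-side quantities such as completion rates.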


Footnotes

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/VCWWGZ.
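For readers who want to fetch the replication materials programmatically, the following is a short sketch that lists the dataset's files through the Harvard Dataverse native API, using the DOI above. The endpoint and JSON field names follow the standard Dataverse API; verify the details against the current Dataverse documentation before relying on them.

    # Query the Harvard Dataverse native API for the replication dataset's
    # metadata, identified by the persistent DOI cited in the footnote.
    import requests

    DOI = "doi:10.7910/DVN/VCWWGZ"
    resp = requests.get(
        "https://dataverse.harvard.edu/api/datasets/:persistentId",
        params={"persistentId": DOI},
        timeout=30,
    )
    resp.raise_for_status()
    dataset = resp.json()["data"]

    # Print the files available in the latest published version.
    for entry in dataset["latestVersion"]["files"]:
        datafile = entry["dataFile"]
        print(datafile["filename"], datafile.get("filesize", "unknown size"))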

References

Andersen, David. 2018. “Replication Data for: Subject Performance in Social Science Experiments Using Crowdsourced Online Samples.” doi:10.7910/DVN/VCWWGZ, Harvard Dataverse, V1, UNF:6:RQAq0OAZinHNkPjUZVcz5A==.
Berinsky, Adam, Huber, Gregory, and Lenz, Gabriel. 2012. “Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk.” Political Analysis 20: 351–368.
Berinsky, Adam, Margolis, Michele, and Sances, Michael. 2016. “Can We Turn Shirkers into Workers?” Journal of Experimental Social Psychology 66: 20–28.
Druckman, James N. and Kam, Cindy D. 2011. “Students As Experimental Participants: A Defense of the ‘Narrow Data Base’.” In Handbook of Experimental Political Science, eds. Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur (pp. 41–57). New York: Cambridge University Press.
Druckman, James, Green, Donald, Kuklinski, James, and Lupia, Arthur. 2006. “The Growth and Development of Experimental Research in Political Science.” American Political Science Review 100 (4): 627–635.
Finnerty, Ailbhe, Kucherbaev, Pavel, Tranquillini, Stefano, and Convertino, Gregorio. 2013. “Keep it Simple: Reward and Task Design in Crowdsourcing.” Paper presented at CHItaly ‘13, Trento, Italy, September 16–20.
Hauser, David J. and Schwarz, Norbert. 2016. “Attentive Turkers: MTurk Participants Perform Better on Online Attention Checks Than Do Subject Pool Participants.” Behavior Research Methods 48 (1): 400–407.
Hitlin, Paul. 2016. “Research in the Crowdsourcing Age, a Case Study.” Pew Research Center, July 2016. Available at: http://www.pewinternet.org/2016/07/11/research-in-the-crowdsourcing-age-a-case-study/
Ho, Chien-Ju, Slivkins, Aleksandrs, Suri, Siddharth, and Vaughan, Jennifer Wortman. 2015. “Incentivizing High Quality Crowdwork.” Paper presented at the International World Wide Web Conference, Florence, Italy, May 18–22.
Horton, John J. and Chilton, Lydia B. 2010. “The Labor Economics of Paid Crowdsourcing.” Presented at the 11th ACM Conference on Electronic Commerce (pp. 209–218). Cambridge, MA: ACM.
Hsu, Joanne W., Schmeiser, Maximilian D., Haggerty, Catherine, and Nelson, Shannon. 2017. “The Effect of Large Monetary Incentives on Survey Completion: Evidence from a Randomized Experiment with the Survey of Consumer Finances.” Public Opinion Quarterly 81 (Fall): 736–747.
Huff, Connor and Tingley, Dustin. 2015. “‘Who Are These People?’ Evaluating the Demographic Characteristics and Political Preferences of MTurk Survey Respondents.” Research & Politics 2 (3).
Ipeirotis, Panagiotis G. 2010. Demographics of Mechanical Turk. NYU Working Paper No. CEDER-10-01. Available at SSRN: https://ssrn.com/abstract=1585030. Accessed March 14, 2018.
Iyengar, Shanto. 2011. “Laboratory Experiments in Political Science.” In Handbook of Experimental Political Science, eds. Druckman, James, Green, Donald, Kuklinski, James, and Lupia, Arthur. New York: Cambridge University Press.
Kaufmann, Nicolas, Schulze, Thimo, and Veit, Daniel. 2011. “More than Fun and Money. Worker Motivation in Crowdsourcing – A Study on Mechanical Turk.” Presented at the Seventeenth Americas Conference on Information Systems, Detroit, Michigan, August 4–7.
Krupnikov, Yanna and Levine, Adam Seth. 2014. “Cross-sample Comparisons and External Validity.” Journal of Experimental Political Science 1: 59–80.
Lau, Richard R. 1995. “Information Search During an Election Campaign: Introducing a Process Tracing Methodology for Political Scientists.” In Political Judgment: Structure and Process, eds. Lodge, M. and McGraw, K. (pp. 179–206). Ann Arbor, MI: University of Michigan Press.
Lau, Richard R., Andersen, David J., and Redlawsk, David P. 2008. “An Exploration of Correct Voting in Recent Presidential Elections.” American Journal of Political Science 52 (2): 395–411.
Lau, Richard R. and Redlawsk, David P. 1997. “Voting Correctly.” American Political Science Review 91 (September): 585–599.
Lau, Richard R. and Redlawsk, David P. 2006. How Voters Decide: Information Processing during Election Campaigns. New York: Cambridge University Press.
Levay, Kevin E., Freese, Jeremy, and Druckman, James N. 2016. “The Demographic and Political Composition of Mechanical Turk Samples.” SAGE Open, January–March 2016, 1–17.
Mason, Winter and Watts, Duncan. 2009. “Financial Incentives and the ‘Performance of Crowds.’” SIGKDD Explorations 11 (2): 100–108.
McCrone, David and Bechhofer, Frank. 2015. Understanding National Identity. Cambridge: Cambridge University Press.
McDermott, Rose. 2002. “Experimental Methods in Political Science.” Annual Review of Political Science 5: 31–61.
Morton, Rebecca and Williams, Kenneth. 2010. Experimental Political Science and the Study of Causality: From Nature to the Lab. Cambridge University Press.
Mutz, Diana. 2011. Population-based Survey Experiments. Princeton, NJ: Princeton University Press.
Paolacci, Gabriele, Chandler, Jesse, and Ipeirotis, Panagiotis. 2010. “Running Experiments on Mechanical Turk.” Judgment and Decision Making 5 (5).
Rogstadius, Jakob, Kostakos, Vassilis, Kittur, Aniket, Smus, Boris, Laredo, Jim, and Vukovic, Maja. 2011. “An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets.” Presented at the Fifth International AAAI Conference on Weblogs and Social Media.
Rouse, Steven V. 2015. “A Reliability Analysis of Mechanical Turk Data.” Computers in Human Behavior 43: 304–307.
Schulze, Thimo, Krug, Simone, and Schader, Martin. 2012. “Workers’ Task Choice in Crowdsourcing and Human Computation Markets.” Presented at the Thirty-Third International Conference on Information Systems, Orlando, FL.
Sears, David O. 1986. “College Sophomores in the Laboratory: Influences of a Narrow Data Base on Social Psychology's View of Human Nature.” Journal of Personality and Social Psychology 51 (3): 515–530.
Stewart, Neil, Ungemach, Christoph, Harris, Adam J. L., Bartels, Daniel M., Newell, Ben R., Paolacci, Gabriele, and Chandler, Jesse. 2015. “The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers.” Judgment and Decision Making 10 (5): 479–491.
Ye, Teng, You, Sangseok, and Robert, Lionel P. 2017. “When Does More Money Work? Examining the Role of Perceived Fairness in Pay on the Performance of Crowdworkers.” Presented at the Eleventh International AAAI Conference on Web and Social Media.
Zechmeister, Elizabeth. 2015. “Ethics and Research in Political Science: The Responsibilities of the Researcher and the Profession.” In Ethics and Experiments: Problems and Solutions for Social Scientists and Policy Professionals, ed. Desposato, Scott. New York, NY: Routledge.
Zizzo, Daniel. 2010. “Experimenter Demand Effects in Economic Experiments.” Experimental Economics 13 (1): 75–98.

Supplementary materials

Andersen and Lau supplementary material: Online Appendix (Word, 29 KB)
Andersen and Lau Dataset (file format unknown)

