
Part II - Methods

Published online by Cambridge University Press:  27 July 2017

Oliver James, University of Exeter
Sebastian R. Jilke, Rutgers University, New Jersey
Gregg G. Van Ryzin, Rutgers University, New Jersey

Type: Chapter
In: Experiments in Public Management Research: Challenges and Contributions, pp. 57–164
Publisher: Cambridge University Press
Print publication year: 2017


References

Angrist, J. D. and Pischke, J. S. 2010. ‘The credibility revolution in empirical economics: how better research design is taking the con out of econometrics’, Journal of Economic Perspectives, 24(2): 3–30.
Angrist, J. D. and Pischke, J. S. 2014. Mastering ‘Metrics: The Path from Cause to Effect. Princeton, NJ: Princeton University Press.
Benjamini, Y. and Hochberg, Y. 1995. ‘Controlling the false discovery rate: a practical and powerful approach to multiple testing’, Journal of the Royal Statistical Society, Series B (Methodological), 57(1): 289–300.
Brown, S. R. and Melamed, L. E. 1990. Experimental Design and Analysis (Quantitative Applications in the Social Sciences, No. 74). London: Sage Publications.
Campbell, D. T. and Stanley, J. C. 1966. Experimental and Quasi-experimental Designs for Research. Chicago: Rand-McNally.
Favero, N. and Bullock, J. 2015. ‘How (not) to solve the problem: an evaluation of scholarly responses to common source bias’, Journal of Public Administration Research and Theory, 25(1): 285–308.
Freedman, D. 2008. ‘On regression adjustment to experimental data’, Advances in Applied Mathematics, 40(2): 180–93.
Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis and Interpretation. New York: W. W. Norton and Company.
Graham, J. W. 2012. Missing Data: Analysis and Design. New York: Springer.
Grimmelikhuijsen, S. and Klijn, A. 2015. ‘The effects of judicial transparency on public trust: evidence from a field experiment’, Public Administration, 93: 995–1011. doi: 10.1111/padm.12149.
Groeneveld, S., Tummers, L., Bronkhorst, B., Ashikali, T., and Van Thiel, S. 2014. ‘Quantitative methods in public administration: their use and development through time’, International Public Management Journal, 18(1): 61–86.
Guala, F. 2005. The Methodology of Experimental Economics. Cambridge: Cambridge University Press.
Holland, P. W. 1986. ‘Statistics and causal inference’, Journal of the American Statistical Association, 81(396): 945–60.
Holm, S. 1979. ‘A simple sequentially rejective multiple test procedure’, Scandinavian Journal of Statistics, 6(2): 65–70.
Jakobsen, M. and Jensen, R. 2015. ‘Common method bias in public management studies’, International Public Management Journal, 18(1): 3–30.
James, O., Jilke, S., Petersen, C., and Van de Walle, S. 2016. ‘Citizens’ blame of politicians for public service failure: experimental evidence about blame reduction through delegation and contracting’, Public Administration Review, 76(1): 83–93.
James, O. and Van Ryzin, G. 2015. ‘Motivated reasoning about public performance: an experimental study of how citizens judge Obamacare’, paper presented to the PMRA 2015 Annual Conference, University of Minnesota, Minneapolis, MN.
Jilke, S., Van Ryzin, G., and Van de Walle, S. 2015. ‘Responses to decline in marketized public services: an experimental evaluation of choice-overload’, Journal of Public Administration Research and Theory, doi: 10.1093/jopart/muv021.
Kirk, R. E. 2012. Experimental Design: Procedures for the Behavioral Sciences. London: Sage Publications.
LaLonde, R. J. 1986. ‘Evaluating the econometric evaluations of training programs with experimental data’, The American Economic Review, 76(4): 604–20.
Lin, W. 2013. ‘Agnostic notes on regression adjustments to experimental data: reexamining Freedman’s critique’, The Annals of Applied Statistics, 7(1): 295–318.
Meier, K. J. and O’Toole, L. J. 2013. ‘Subjective organizational performance and measurement error: common source bias and spurious relationships’, Journal of Public Administration Research and Theory, 23(2): 429–56.
Morgan, S. L. and Winship, C. 2015. Counterfactuals and Causal Inference. 2nd edition. Cambridge: Cambridge University Press.
Murphy, K. R., Myors, B., and Wolach, A. 2014. Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests. New York: Routledge.
Remler, D. K. and Van Ryzin, G. G. 2015. Research Methods in Practice: Strategies for Description and Causation. London: Sage Publications.
Rubin, D. B. 1974. ‘Estimating causal effects of treatments in randomized and non-randomized studies’, Journal of Educational Psychology, 66: 688–701.
Shadish, W. R., Cook, T. D., and Campbell, D. T. 2002. Experimental and Quasi-experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.
Stuart, E. A. 2010. ‘Matching methods for causal inference: a review and a look forward’, Statistical Science, 25(1): 1–21.
Wendorf, C. A. 2004. ‘Primer on multiple regression coding: common forms and the additional case of repeated contrasts’, Understanding Statistics, 3(1): 47–57.

References

Allcott, H. 2015. ‘Site selection bias in program evaluation’, Quarterly Journal of Economics, 130(3): 1117–65.
Anderson, D. M. and Edwards, B. C. 2014. ‘Unfulfilled promise: laboratory experiments in public management research’, Public Management Review, doi: 10.1080/14719037.2014.943272.
Arceneaux, K. and Butler, D. M. 2015. ‘How not to increase participation in local government: the advantages of experiments when testing policy interventions’, Public Administration Review, early view.
Avellaneda, C. N. 2013. ‘Mayoral decision-making: issue salience, decision context, and choice constraint? An experimental study with 120 Latin American mayors’, Journal of Public Administration Research and Theory, 23: 631–61.
Bækgaard, M., Baethge, C., Blom-Hansen, J., Dunlop, C., Esteve, M., Jakobsen, M., Kisida, B., Marvel, J., Moseley, A., Serritzlew, S., Stewart, P., Kjaergaard Thomsen, M., and Wolf, P. 2015. ‘Conducting experiments in public management research: a practical guide’, International Public Management Journal, 18(2): 323–42.
Banerjee, A. V. and Duflo, E. 2014. ‘The experimental approach to development economics’, in Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences, edited by Teele, Dawn Langan, 78–114. New Haven, CT: Yale University Press.
Barnow, B. S. 2010. ‘Setting up social experiments: the good, the bad, and the ugly’, Zeitschrift für Arbeitsmarkt Forschung, 43: 91–105.
Barrett, C. B. and Carter, M. R. 2014. ‘Retreat from radical skepticism: rebalancing theory, observational data, and randomization in development economics’, in Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences, edited by Teele, Dawn Langan, 58–77. New Haven, CT: Yale University Press.
Beath, A., Christia, F., and Enikolopov, R. 2013. ‘Empowering women through development aid: evidence from a field experiment in Afghanistan’, American Political Science Review, 107: 540–57.
Belle, N. 2015. ‘Performance-related pay and the crowding out of motivation in the public sector: a randomized field experiment’, Public Administration Review, 75(2): 230–41.
Bonetti, S. 1998. ‘Experimental economics and deception’, Journal of Economic Psychology, 19: 377–95.
Boyne, G. A., James, O., John, P., and Petrovsky, N. 2009. ‘Democracy and government performance: holding incumbents accountable in English local governments’, Journal of Politics, 71(4): 1273–84.
Bozeman, B. and Scott, P. 1992. ‘Laboratory experiments in public policy and management’, Journal of Public Administration Research and Theory, 2(3): 293–313.
Brewer, G. A. and Brewer, G. A. Jr. 2011. ‘Parsing public/private differences in work motivation and performance: an experimental study’, Journal of Public Administration Research and Theory, 21(suppl 3): i347–62.
Butler, D. M. 2010. ‘Monitoring bureaucratic compliance: using field experiments to improve governance’, Public Sector Digest, 2010 (winter): 41–4.
Butler, D. M. and Broockman, D. E. 2011. ‘Do politicians racially discriminate against constituents? A field experiment on state legislators’, American Journal of Political Science, 55: 463–77.
Card, D., DellaVigna, S., and Malmendier, U. 2011. ‘The role of theory in field experiments’, Journal of Economic Perspectives, 25(3): 39–62.
Colson, G., Corrigan, J. R., Grebitus, C., Loureiro, M. L., and Rousu, M. C. 2015. ‘Which deceptive practices, if any, should be allowed in experimental economics research? Results from surveys of applied experimental economists and students’, American Journal of Agricultural Economics, early view, 1–12. doi: 10.1093/ajae/aav067.
Coppock, A. and Green, D. P. 2015. ‘Assessing the correspondence between experimental results obtained in the lab and field: a review of recent social science research’, Political Science Research and Methods, 3: 113–31.
Cotterill, S. and Richardson, L. 2010. ‘Expanding the use of experiments on civic behavior: experiments with local governments as research partners’, The Annals of the American Academy of Political and Social Science, 628: 148–64.
Druckman, J. N. and Kam, C. D. 2011. ‘Students as experimental participants: a defense of the “narrow database”’, in Handbook of Experimental Political Science, edited by Druckman, James, Green, Donald P., Kuklinski, James H., and Lupia, Arthur, 41–57. New York: Cambridge University Press.
Duflo, E., Glennerster, R., and Kremer, M. 2006. Using Randomization in Development Economics Research: A Toolkit. Cambridge, MA: National Bureau of Economic Research.
Dunning, T. 2012. Natural Experiments in the Social Sciences. New York: Cambridge University Press.
Falk, A. and Heckman, J. J. 2009. ‘Lab experiments are a major source of knowledge in the social sciences’, Science, 326(5952): 535–8.
Fisher, R. A. 1926. ‘The arrangement of field experiments’, Journal of the Ministry of Agriculture of Great Britain, 33: 503–13.
Fisher, R. A. 1935. The Design of Experiments. Edinburgh: Oliver and Boyd.
Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis and Interpretation. New York: Norton.
Green, D. P. and Thorley, D. 2014. ‘Field experimentation and the study of law and policy’, Annual Review of Law and Social Science, 10: 53–72.
Guala, F. 2005. The Methodology of Experimental Economics. Cambridge: Cambridge University Press.
Harrison, G., Lau, M., and Williams, M. 2002. ‘Estimating individual discount rates in Denmark: a field experiment’, American Economic Review, 92(5): 1606–17.
Harrison, G. W. and List, J. A. 2004. ‘Field experiments’, Journal of Economic Literature, 42(4): 1009–55.
Henrich, J., Heine, S. J., and Norenzayan, A. 2010. ‘Beyond WEIRD: towards a broad-based behavioural science’, Behavioral and Brain Sciences, 33(2–3): 111–35.
Hertwig, R. and Ortmann, A. 2008. ‘Deception in experiments: revisiting the arguments in its defense’, Ethics and Behavior, 18(1): 59–82.
Hess, D. R., Hanmer, M. J., and Nickerson, D. M. 2015. ‘Encouraging local bureaucratic compliance with federal civil rights laws: field experiments with agencies implementing the national voter registration act’, unpublished paper. www.douglasrhess.com/uploads/4/3/7/8/43789009/hess_hanmer_nickerson_may_2015_nvra_compliance.pdf. Accessed 8 August 2015.
Jakobsen, M. 2013. ‘Can government initiatives increase citizen coproduction? Results of a randomized field experiment’, Journal of Public Administration Research and Theory, 23(1): 27–54.
Jakobsen, M. and Andersen, S. 2013. ‘Coproduction and equity in public service delivery’, Public Administration Review, 73(5): 704–13.
James, O. 2011. ‘Performance measures and democracy: information effects on citizens in field and laboratory experiments’, Journal of Public Administration Research and Theory, 21: 399–418.
James, O. and Moseley, A. 2014. ‘Does performance information about public services affect citizens’ perceptions, satisfaction, and voice behaviour? Field experiments with absolute and relative performance information’, Public Administration, 92(2): 493–511.
John, P. 2016. Experimentation in Political Science and Public Policy: The Challenge and Promise of Field Trials. London: Routledge.
John, P., Cotterill, S., Moseley, A., Richardson, L., Smith, G., Stoker, G., and Wales, C. 2011. Nudge, Nudge, Think, Think: Experimenting with Ways to Change Civic Behaviour. London: Bloomsbury Academic.
Levitt, S. D. and List, J. A. 2009. ‘Field experiments in economics: the past, the present, and the future’, European Economic Review, 53: 1–18.
List, J. 2011. ‘Why economists should conduct field experiments and 14 tips for pulling one off’, Journal of Economic Perspectives, 25(3): 3–16.
List, J. and Metcalfe, R. 2014. ‘Field experiments in the developed world: an introduction’, Oxford Review of Economic Policy, 30(4): 585–96.
Manzi, J. 2012. Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. New York: Basic Books.
Margetts, H. Z. 2011. ‘Experiments for public management research’, Public Management Review, 13(2): 189–208.
McClendon, G. H. 2012. ‘Ethics of using public officials as field experiment subjects’, Newsletter of the APSA Experimental Section, 3: 13–20.
Medical Research Council 1948. ‘Streptomycin treatment of pulmonary tuberculosis’, British Medical Journal, 2(4582): 769–82.
Moher, D., Hopewell, S., Schulz, K. F., Montori, V., Gøtzsche, P. C., Devereaux, P. J., Elbourne, D., Egger, M., and Altman, D. G. 2010. ‘CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomized trials’, British Medical Journal, 340: c869.
Moseley, A., James, O., John, P., Richardson, L., Ryan, M., and Stoker, G. 2015. ‘Can approaches shown to increase monetary donations to charity also increase voluntary donations of time? Evidence from field experiments using social information’, unpublished working paper.
Moseley, A. and Stoker, G. 2015. ‘Putting public policy defaults to the test: the case of organ donor registration’, International Public Management Journal, 18(2): 246–64.
Mutz, D. C. 2011. Population-Based Survey Experiments. Princeton, NJ: Princeton University Press.
Olken, B. 2007. ‘Monitoring corruption: evidence from a field experiment in Indonesia’, Journal of Political Economy, 115: 200–49.
Orr, L. L. 1999. Social Experiments: Evaluating Public Programs with Experimental Methods. Thousand Oaks, CA: Sage.
Pawson, R. and Tilley, N. 1997. Realistic Evaluation. London: Sage.
Riccio, J. and Bloom, H. S. 2002. ‘Extending the reach of randomized social experiments: new directions in evaluations of American welfare-to-work and employment initiatives’, Journal of the Royal Statistical Society: Series A (Statistics in Society), 165: 13–30.
Rothwell, P. M. 2005. ‘External validity of randomized controlled trials: to whom do the results of this trial apply?’, The Lancet, 365: 82–93.
Rousu, M. C., Colson, G., Corrigan, J. R., Grebitus, C., and Loureiro, M. L. 2015. ‘Deception in experiments: towards guidelines on use in applied economics research’, Applied Economic Perspectives and Policy, 37(3): 524–36.
Salovey, P. and Williams-Piehota, P. 2004. ‘Field experiments in social psychology: message framing and the promotion of health protective behaviors’, American Behavioral Scientist, 47(5): 488–505.
Salsburg, D. 2001. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. New York: W. H. Freeman.
Shadish, W. R., Cook, T. D., and Campbell, D. T. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.
Sherman, L. W. and Rogan, D. P. 1995. ‘Deterrent effects of police raids on crack houses: a randomized, controlled experiment’, Justice Quarterly, 12: 755–81.
Sherman, L., Rogan, D., Edwards, T., Whipple, R., Shreve, D., Witcher, D., Trimble, W., The Street Narcotics Unit, Velke, R., Blumberg, M., Beatty, A., and Bridgeforth, C. 1995. ‘Deterrent effects of police raids on crack houses: a randomized, controlled experiment’, Justice Quarterly, 12: 755–81.
Sherman, L. W. and Weisburd, D. 1995. ‘General deterrent effects of police patrol in crime hot spots: a randomized, controlled trial’, Justice Quarterly, 12(4): 635–48.
Solomon, P., Cavanaugh, M. M., and Draine, J. 2009. Randomized Controlled Trials: Design and Implementation for Community Based Psychosocial Interventions. Oxford: Oxford University Press.
Teele, D. L. 2015. ‘Reflections on the ethics of field experiments’, in Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences, edited by Teele, Dawn Langan, 115–40. New Haven, CT: Yale University Press.
Torgerson, D. J. and Torgerson, C. J. 2008. Designing Randomized Trials in Health, Education and the Social Sciences. Basingstoke: Palgrave.
Vivalt, E. 2015. ‘Heterogeneous treatment effects in impact evaluation’, American Economic Review: Papers & Proceedings, 105: 467–70.
Worthy, B., John, P., and Vannoni, M. 2015. ‘Information requests and local accountability: an experimental analysis of local parish responses to an informational campaign’, unpublished paper.
Yates, F. 1964. ‘Sir Ronald Fisher and the design of experiments’, Biometrics, 20: 307–21.

References

Auspurg, K. and Hinz, T. 2014. Factorial Survey Experiments. Thousand Oaks, CA: Sage Publications.
Andrews, R., Boyne, G., and Walker, R. 2006. ‘Strategy content and organizational performance: an empirical analysis’, Public Administration Review, 66(1): 52–63.
Bishop, G. and Smith, A. 2001. ‘Response-order effects and the early Gallup split-ballots’, Public Opinion Quarterly, 65(4): 479–505.
Bishop, G. and Smith, A. 1991. ‘Gallup split ballot experiments’, The Public Perspective, July/August: 25–7.
Blair, G. and Imai, K. 2012. ‘Statistical analysis of list experiments’, Political Analysis, 20: 47–77.
Bohannon, J. 2011. ‘Social science for pennies’, Science, 334(6054): 307.
Bouwman, R. and Grimmelikhuijsen, S. 2016. ‘Experimental public administration from 1992 to 2014: a systematic literature review and ways forward’, International Journal of Public Sector Management, 29(2).
Brewer, G. and Walker, R. 2010. ‘The impact of red tape on governmental performance: an empirical analysis’, Journal of Public Administration Research and Theory, 20(1): 233–57.
Bullock, J., Stritch, J., and Rainey, H. 2015. ‘International comparison of public and private employees’ work motives, attitudes, and perceived rewards’, Public Administration Review, 75(3): 489–97.
Cantril, H. 1944. Gauging Public Opinion. Princeton, NJ: Princeton University Press.
Chaudhuri, A. and Christofides, T. 2007. ‘Item count technique in estimating the proportion of people with a sensitive feature’, Journal of Statistical Planning and Inference, 137(2): 589–93.
Christenson, D. P. and Glick, D. M. 2012. ‘Crowdsourcing panel studies and real-time experiments in MTurk’, The Political Methodologist, 20(2): 27–32.
Christensen, R. and Stritch, J. 2016. ‘Prosocial Dr. Jekyll, meet Deviant Mr. Hyde: exploring the confluence of other-oriented public values and self-centered narcissism’, paper presented at The Public Values Consortium Workshop, Arizona State University.
Droitcour, J., Caspar, R., Hubbard, M., Parsley, T., Visscher, W., and Ezzati, T. 1991. ‘The item count technique as a method of indirect questioning: a review of its development and a case study application’, in Measurement Errors in Surveys, eds. Briemer, P., Groves, R., Lyberg, L., Mathiowetz, N., and Sudman, S., 185–210. New York: John Wiley & Sons.
Gaines, B. and Kuklinski, J. 2011. ‘Treatment effects’, in Cambridge Handbook of Experimental Political Science, eds. Druckman, J., Green, D., Kuklinski, J., and Lupia, A. Cambridge: Cambridge University Press.
Gaines, B., Kuklinski, J., and Quirk, P. 2007. ‘The logic of the survey experiment reexamined’, Political Analysis, 15: 1–20.
Glynn, A. 2013. ‘What can we learn with statistical truth serum? Design and analysis of the list experiment’, Public Opinion Quarterly, 77: 159–72.
Goodman, J., Cryder, C., and Cheema, A. 2013. ‘Data collection in a flat world: the strengths and weaknesses of Mechanical Turk samples’, Journal of Behavioral Decision Making, 26(3): 213–24.
Gossen, S. 2014. Social Desirability in Survey Research: Can the List Experiment Provide the Truth? PhD dissertation. Philipps-University Marburg.
Green, D. and Kern, H. 2012. ‘Modeling heterogeneous treatment effects in survey experiments with Bayesian additive regression trees’, Public Opinion Quarterly, 76(3): 491–511.
Green, P., Krieger, A., and Wind, Y. 2001. ‘Thirty years of conjoint analysis: reflections and prospects’, in Marketing Research Modelling: Progress and Prospects, eds. Wind, Y. and Green, P., 117–39. New York: Springer.
Green, P. and Rao, V. 1971. ‘Conjoint measurement for quantifying judgemental data’, Journal of Marketing Research, 8: 355–63.
Groeneveld, S., Tummers, L., Bronkhorst, B., Ashikali, T., and Van Thiel, S. 2015. ‘Quantitative methods in public administration: their use and development through time’, International Public Management Journal, 18(1): 61–86.
Hainmueller, J., Hangartner, D., and Yamamoto, T. 2015. ‘Validating vignette and conjoint survey experiments against real-world behavior’, Proceedings of the National Academy of Sciences, 112(8): 2395–2400.
Hainmueller, J., Hopkins, D., and Yamamoto, T. 2014. ‘Causal inference in conjoint analysis: understanding multidimensional choices via stated preference experiments’, Political Analysis, 22: 1–30.
Hansen, K., Olsen, A., and Bech, M. 2015. ‘Cross-national yardstick comparisons: a choice experiment on a forgotten voter heuristic’, Political Behavior, 37(4): 767–89.
Huff, C. and Tingley, D. 2015. ‘“Who are these people?” Evaluating the demographic characteristics and political preferences of MTurk survey respondents’, Research & Politics, 2(3): 2053168015604648.
James, O. and Moseley, A. 2014. ‘Does performance information about public services affect citizens’ perceptions, satisfaction, and voice behaviour? Field experiments with absolute and relative performance information’, Public Administration, 92(2): 493–511.
Jasso, G. 2006. ‘Factorial survey methods for studying beliefs and judgments’, Sociological Methods & Research, 34(3): 334–423.
Jilke, S. 2015. ‘Choice and equality: are vulnerable citizens worse off after liberalization reforms?’, Public Administration, 93(1): 68–85.
Jilke, S., Meuleman, B., and Van de Walle, S. 2015. ‘We need to compare, but how? Measurement equivalence in comparative public administration’, Public Administration Review, 75(1): 36–48.
Jilke, S. and Tummers, L. 2016. ‘Cues of deservingness and street-level decision making: evidence from a conjoint experiment among US teachers’, paper presented at IRSPM 2016 in Hong Kong.
Jilke, S., Van Ryzin, G., and Van de Walle, S. 2015. ‘Responses to decline in marketized public services: an experimental evaluation of choice-overload’, Journal of Public Administration Research and Theory, online first.
Kaufmann, W. and Feeney, M. K. 2012. ‘Objective formalization, perceived formalization and perceived red tape’, Public Management Review, 14(8): 1195–1214.
Kim, S. H. and Kim, S. 2013. ‘National culture and social desirability bias in measuring public service motivation’, Administration & Society, online first.
Kjeldsen, A. M. and Botcher Jacobsen, C. 2013. ‘Public service motivation and employment sector: attraction or socialization?’, Journal of Public Administration Research and Theory, 23(4): 899–926.
Kuklinski, J., Cobb, M., and Gilens, M. 1997. ‘Racial attitudes and the “new South”’, Journal of Politics, 59(2): 323–49.
Marvel, J. 2015. ‘Unconscious bias in citizens’ evaluations of public sector performance’, Journal of Public Administration Research and Theory, online first.
Meier, K. and O’Toole, L. 2013. ‘Subjective organizational performance and measurement error: common source bias and spurious relationships’, Journal of Public Administration Research and Theory, 23(2): 429–56.
Nielsen, P. and Bækgaard, M. 2015. ‘Performance information, blame avoidance, and politicians’ attitudes to spending and reform: evidence from an experiment’, Journal of Public Administration Research and Theory, 25(2): 545–69.
Miller, J. 1984. A New Survey Technique for Studying Deviant Behavior. PhD dissertation. George Washington University.
Miller, J. L., Rossi, P. H., and Simpson, J. E. 1991. ‘Felony punishments: a factorial survey of perceived justice in criminal sentencing’, Journal of Criminal Law and Criminology, 82(2).
Mutz, D. 2011. Population-Based Survey Experiments. Princeton, NJ: Princeton University Press.
Noelle-Neumann, E. 1970. ‘Wanted: rules for wording structured questionnaires’, Public Opinion Quarterly, 34(2): 191–201.
Oppenheimer, D., Meyvis, T., and Davidenko, N. 2009. ‘Instructional manipulation checks: detecting satisficing to increase statistical power’, Journal of Experimental Social Psychology, 45: 867–72.
Paolacci, G. and Chandler, J. 2014. ‘Inside the Turk: understanding Mechanical Turk as a participant pool’, Current Directions in Psychological Science, 23(3): 184–8.
Payne, S. 1951. The Art of Asking Questions. Princeton, NJ: Princeton University Press.
Presser, S. and Schuman, H. 1980. ‘The measurement of a middle position in attitude surveys’, Public Opinion Quarterly, 44(1): 70–85.
Raghavarao, D., Wiley, J., and Chitturi, P. 2010. Choice-Based Conjoint Analysis: Models and Designs. Boca Raton, FL: CRC Press.
Rossi, P. H. and Nock, S. L. 1982. Measuring Social Judgments: The Factorial Survey Approach. London: Sage Publications.
Sniderman, P. M. 2011. ‘The logic and design of the survey experiment: an autobiography of a methodological innovation’, in Cambridge Handbook of Experimental Political Science, eds. Druckman, J., Green, D., Kuklinski, J., and Lupia, A. Cambridge: Cambridge University Press.
Steijn, B. 2008. ‘Person-environment fit and public service motivation’, International Public Management Journal, 11(1): 13–27.
Suri, S. and Watts, D. 2011. ‘Cooperation and contagion in web-based, networked public goods experiments’, PLoS ONE, 6(3): e16836.
Tourangeau, R., Rips, L., and Rasinski, K. 2000. The Psychology of Survey Response. Cambridge: Cambridge University Press.
Tummers, L., Weske, U., Bouwman, R., and Grimmelikhuijsen, S. 2016. ‘The impact of red tape on citizen satisfaction: an experimental study’, International Public Management Journal, 19(3): 320–41.
Van Loon, N. M. 2016. ‘Is public service motivation related to overall and dimensional work-unit performance as indicated by supervisors?’, International Public Management Journal, 19(1): 78–110.
Van Ryzin, G. 2007. ‘Pieces of a puzzle: linking government performance, citizen satisfaction, and trust’, Public Performance & Management Review, 30(4): 521–35.
Van Ryzin, G. 2013. ‘An experimental test of the expectancy-disconfirmation theory of citizen satisfaction’, Journal of Policy Analysis and Management, 32(3): 567–614.
Van de Walle, S., Van Roosbroek, S., and Bouckaert, G. 2008. ‘Trust in the public sector: is there any evidence for a long-term decline?’, International Review of Administrative Sciences, 74(1): 47–64.
Van de Walle, S. and Van Ryzin, G. 2011. ‘The order of questions in a survey on citizen satisfaction with public services: lessons from a split-ballot experiment’, Public Administration, 89(4): 1436–50.
Vogel, D. and Kroll, A. 2016. ‘The stability and change of PSM-related values across time: testing theoretical expectations against panel data’, International Public Management Journal, 19(1): 53–77.
Walker, R., Andrews, R., Boyne, G., Meier, K., and O’Toole, L. 2010. ‘Wakeup call: strategic management, network alarms, and performance’, Public Administration Review, 70(5): 731–41.
Weibel, A., Rost, K., and Osterloh, M. 2010. ‘Pay for performance in the public sector – benefits and (hidden) costs’, Journal of Public Administration Research and Theory, 20(2): 387–412.

References

Alatas, V., Cameron, L., Chaudhuri, A., Erkal, N., and Gangadharan, L. 2009. ‘Subject pool effects in a corruption experiment: a comparison of Indonesian public servants and Indonesian students’, Experimental Economics 12 (1): 113–32.
Anderson, C. A. and Bushman, B. J. 1997. ‘External validity of “trivial” experiments: the case of laboratory aggression’, Review of General Psychology 1 (1): 19–41.
Anderson, D. M. and Edwards, B. C. 2015. ‘Unfulfilled promise: laboratory experiments in public administration research’, Public Management Review 17 (10): 1–25.
Aronson, E., Brewer, M. B., and Carlsmith, M. J. 1985. ‘Experimentation in social psychology’, in Lindzey, G. and Aronson, E. (eds.), Handbook of Social Psychology, Vol. 2. New York: Random House, pp. 441–86.
Baekgaard, M., Baethge, C., Blom-Hansen, J., Dunlop, C., Esteve, M., Jakobsen, M., Kisida, B., et al. 2015. ‘Conducting experiments in public management research: a practical guide’, International Public Management Journal 18 (2): 323–42.
Berg, J., Dickhaut, J., and McCabe, K. 1995. ‘Trust, reciprocity, and social history’, Games and Economic Behavior 10 (1): 122–42.
Bock, O., Nicklisch, A., and Baetge, I. 2012. ‘Hroot: Hamburg registration and organization online tool’, WiSo-HH Working Paper Series, no. 1.
Bozeman, B. and Scott, P. 1992. ‘Laboratory experiments in public policy and management’, Journal of Public Administration Research and Theory 2 (3): 293–313.
Brennan, M. and Charbonneau, J. 2009. ‘Improving mail survey response rate using chocolate and replacement questionnaires’, Public Opinion Quarterly 73 (2): 368–78.
Brewer, G. A. and Brewer, G. A. Jr. 2011. ‘Parsing public/private differences in work motivation and performance: an experimental study’, Journal of Public Administration Research and Theory 21 (suppl 3): i347–62.
Chen, D., Schonger, M., and Wickens, C. 2015. ‘oTree – an open-source platform for laboratory, online, and field experiments’, www.otree.org/oTree.pdf (accessed November 29).
Cook, K. S. and Yamagishi, T. 2008. ‘A defense of deception on scientific grounds’, Social Psychology Quarterly 71 (3): 215–21.
De Fine Licht, J. 2014. ‘Policy area as a potential moderator of transparency effects: an experiment’, Public Administration Review 74 (3): 361–71.
Delfgaauw, J. and Dur, R. 2010. ‘Managerial talent, motivation, and self-selection into public management’, Journal of Public Economics 94 (9): 654–60.
Dickson, E. S. 2011. ‘Economics vs. psychology experiments: stylization, incentives, and deception’, in Druckman, Green, Kuklinski, and Lupia (eds.), pp. 58–70.
Dobbins, G. H., Lane, I. M., and Steiner, D. D. 1988. ‘A note on the role of laboratory methodologies in applied behavioral research: don’t throw out the baby with the bath water’, Journal of Organizational Behavior 9 (3): 281–6.
Druckman, J. N., Green, D. P., Kuklinski, J. H., and Lupia, A. 2011. Cambridge Handbook of Experimental Political Science. Cambridge: Cambridge University Press.
Druckman, J. N. and Kam, C. D. 2011. ‘Students as experimental participants: a defense of the narrow data base’, in Druckman, Green, Kuklinski, and Lupia (eds.), pp. 41–57.
Druckman, J. N. and Leeper, T. J. 2012. ‘Learning more from political communication experiments: pretreatment and its effects’, American Journal of Political Science 56 (4): 875–96.
Esteve, M., Urbig, D., van Witteloostuijn, A., and Boyne, G. 2016. ‘Prosocial behavior and public service motivation’, Public Administration Review 76 (1): 177–87.
Esteve, M., van Witteloostuijn, A., and Boyne, G. 2015. ‘The effects of public service motivation on collaborative behavior: evidence from three experimental games’, International Public Management Journal 18 (2): 171–89.
Fischbacher, U. 2007. ‘Z-Tree: Zurich toolbox for ready-made economic experiments’, Experimental Economics 10 (2): 171–8.
Gailmard, S. and Patty, J. W. 2012. ‘Formal models of bureaucracy’, Annual Review of Political Science 15: 353–77.
Gill, D. and Prowse, V. 2015. A Novel Computerized Real Effort Task Based on Sliders. (http://users.ox.ac.uk/~nuff0229/GillProwseSliders.pdf).
Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., and Soutter, C. L. 2000. ‘Measuring trust’, The Quarterly Journal of Economics 115 (3): 811–46.
Greiner, B. 2015. ‘Subject pool recruitment procedures: organizing experiments with ORSEE’, Journal of the Economic Science Association 1 (1): 114–25.
Grimmelikhuijsen, S. and Klijn, A. 2015. ‘The effects of judicial transparency on public trust: evidence from a field experiment’, Public Administration 93 (4): 995–1011.
Grimmelikhuijsen, S., Porumbescu, G., Hong, B., and Im, T. 2013. ‘The effect of transparency on trust in government: a cross-national comparative experiment’, Public Administration Review 73 (4): 575–86.
Harrison, G. W. and List, J. A. 2004. ‘Field experiments’, Journal of Economic Literature 42 (4): 1009–55.
Hertwig, R. and Ortmann, A. 2001. ‘Experimental practices in economics: a methodological challenge for psychologists?’, Behavioral and Brain Sciences 24 (3): 383–403.
Iyengar, S. 2011. ‘Laboratory experiments in political science’, in Druckman, Green, Kuklinski, and Lupia (eds.), pp. 73–88.
James, O. 2011. ‘Performance measures and democracy: information effects on citizens in field and laboratory experiments’, Journal of Public Administration Research and Theory 21 (3): 399–418.
Kim, S. H. and Kim, S. 2014. ‘National culture and social desirability bias in measuring public service motivation’, Administration and Society, published online first.
Kittel, B., Luhan, W. J., and Morton, R. B. (eds.) 2012. Experimental Political Science: Principles and Practices. Basingstoke: Palgrave Macmillan.
Knott, J. H., Miller, G. J., and Verkuilen, J. 2003. ‘Adaptive incrementalism and complexity: experiments with two-person cooperative signaling games’, Journal of Public Administration Research and Theory 13 (3): 341–65.
Kruglanski, A. W. 1975. ‘The human subject in the psychology experiment: fact and artifact’, Advances in Experimental Social Psychology 8: 101–47.
Levitt, S. D. and List, J. A. 2007. ‘What do laboratory experiments measuring social preferences reveal about the real world?’, The Journal of Economic Perspectives 21 (2): 153–74.
Lucas, J. W. 2003. ‘Theory-testing, generalization, and the problem of external validity’, Sociological Theory 21 (3): 236–53.
Luechinger, S., Stutzer, A., and Winkelmann, R. 2010. ‘Self-selection models for public and private sector job satisfaction’, Research in Labor Economics 30: 233–51.
Maccoby, E. E. and Maccoby, N. 1954. ‘The interview: a tool of social science’, in Lindzey and Aronson (eds.), Handbook of Social Psychology, pp. 449–87.
Margetts, H. Z. 2011. ‘Experiments for public management research’, Public Management Review 13 (2): 189–208.
McCarty, N. and Meirowitz, A. 2007. Political Game Theory: An Introduction. Cambridge: Cambridge University Press.
McCubbins, M. D., Noll, R. G., and Weingast, B. R. 1987. ‘Administrative procedures as instruments of political control’, Journal of Law, Economics, and Organization 3 (2): 243–77.
Milgram, S. 1974. Obedience to Authority: An Experimental View. New York: Harper and Row.
Morrow, J. D. 1994. Game Theory for Political Scientists. Princeton, NJ: Princeton University Press.
Morton, R. B. 1999. Methods and Models: A Guide to the Empirical Analysis of Formal Models in Political Science. Cambridge: Cambridge University Press.
Morton, R. B. and Williams, K. C. 2010. Experimental Political Science and the Study of Causality: From Nature to the Lab. Cambridge: Cambridge University Press.
Niskanen, W. A. 1971. Bureaucracy and Representative Government. Chicago: Aldine Atherton.
Nutt, P. C. 2006. ‘Comparing public and private sector decision-making practices’, Journal of Public Administration Research and Theory 16 (2): 289–318.
Ostrom, E. and Walker, J. 2005. Trust and Reciprocity: Russell Sage Foundation Series on Trust, Vol. 6. New York: Russell Sage Foundation.
Plott, C. R. 1991. ‘Will economics become an experimental science?’, Southern Economic Journal 57: 450–61.
Remus, W. 1986. ‘Graduate students as surrogates for managers in experiments on business decision making’, Journal of Business Research 14 (1): 19–25.
Roth, A. E. 1995. ‘Introduction to experimental economics’, in Kagel, J. H. and Roth, A. E. (eds.), The Handbook of Experimental Economics. Princeton, NJ: Princeton University Press, pp. 3–109.
Scott, P. G. 1997. ‘Assessing determinants of bureaucratic discretion: an experiment in street-level decision making’, Journal of Public Administration Research and Theory 7 (1): 35–58.
Sell, J. 2008. ‘Introduction to deception debate’, Social Psychology Quarterly 71 (3): 213.
Shadish, W. R., Cook, T. D., and Campbell, D. T. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.
Shepsle, K. A. 1992. ‘Bureaucratic drift, coalitional drift, and time consistency: a comment on Macey’, Journal of Law, Economics, and Organization 8 (1): 111–18.
Smith, J. 2016. ‘The motivational effects of mission matching: a lab-experimental test of a moderated mediation model’, Public Administration Review 76 (4): 626–37.
Smith, V. L. 1976. ‘Experimental economics: induced value theory’, The American Economic Review 66 (2): 274–9.
Smith, V. L. 1982. ‘Microeconomic systems as an experimental science’, The American Economic Review 72 (5): 923–55.
Tepe, M. 2016. ‘In public servants we trust? A behavioral experiment on public service motivation and trust among students of public administration, business sciences and law’, Public Management Review 18 (4): 508–38.
Van de Walle, S. and Van Ryzin, G. G. 2011. ‘The order of questions in a survey on citizen satisfaction with public services: lessons from a split-ballot experiment’, Public Administration 89 (4): 1436–50.
Webster, M. and Sell, J. 2014. Laboratory Experiments in the Social Sciences. London: Elsevier.
Weingast, B. R. and Moran, M. J. 1983. ‘Bureaucratic discretion or congressional control? Regulatory policymaking by the federal trade commission’, Journal of Political Economy 91 (5): 765–800.
Wilde, L. 1981. ‘On the use of laboratory experiments in economics’, in Pitt, J. (ed.), The Philosophy of Economics. Dordrecht: Reidel, pp. 137–43.
Wilson, R. and Eckel, C. 2011. ‘Trust and social exchange’, in Druckman, Green, Kuklinski, and Lupia (eds.), pp. 243–57.
Woon, J. 2012. ‘Laboratory tests of formal theory and behavioral inference’, in Kittel, Luhan, and Morton (eds.), pp. 54–71.
