
Unfolding the Black Box of Questionable Research Practices: Where Is the Line Between Acceptable and Unacceptable Practices?

Published online by Cambridge University Press:  05 March 2020

Christian Linder
Affiliation:
ESCP Europe Business School
Siavash Farahbakhsh
Affiliation:
Free University of Bozen-Bolzano

Abstract

Despite the extensive literature on what questionable research practices (QRPs) are and how to measure them, the normative underpinnings of such practices remain underexplored. QRPs often fall into a grey area between justifiable and unjustifiable practices. Precisely where to draw the line between them challenges individual scholars, and this ambiguity harms science. We investigate QRPs from a normative perspective using the theory of communicative action, highlighting the role of the collective in assessing individual behaviours. Our contribution is a framework for identifying when particular actions cross over from acceptable to unacceptable practice. The article thereby provides grounds for developing scientific standards that raise the quality of scientific research.

Type
Article
Copyright
© 2020 Business Ethics Quarterly

