
Open science practices in IWO psychology: Urban legends, misconceptions, and a false dichotomy

Published online by Cambridge University Press:  27 January 2023

Joachim Hüffmeier*
Affiliation:
Department of Psychology, TU Dortmund University, Emil-Figge-Strasse 50, 44227 Dortmund, Germany
Ann-Kathrin Torka
Affiliation:
Department of Psychology, TU Dortmund University, Emil-Figge-Strasse 50, 44227 Dortmund, Germany
Elisabeth Jäckel
Affiliation:
Faculty of Economics and Business, University of Amsterdam, Plantage Muidergracht 12, 1001 NL Amsterdam, The Netherlands
Philipp Schäpers
Affiliation:
Department of Psychology, University of Münster, Fliednerstrasse 21, 48149 Münster, Germany
*Corresponding author: Email: joachim.hueffmeier@tu-dortmund.de

Type: Commentaries
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Although we appreciate Guzzo et al. (2022) addressing the issue of open science practices (OSPs) and pointing out potential risks, we believe that their focal article accurately reflects neither OSPs nor the related perils. In our commentary, we highlight and elaborate on the following four (partly) interrelated and problematic issues that run the risk of misrepresenting the usefulness of OSPs: (a) There are very good reasons why OSPs are currently discussed, although they are hardly mentioned by Guzzo and colleagues. The perils that the authors perceive are either (b) exaggerated and dramatized or (c) simply due to misconceptions related to OSPs on their part. (d) Guzzo et al. see a dichotomy between different types of science within Industrial, Work, and Organizational (IWO) Psychology and suppose that the usefulness of OSPs is limited to only one of them (i.e., the hypothetico-deductive approach).

OSPs as a bad “new” key norm

In view of an article that intends to advocate for the value of open science, it was astonishing to see how little attention the authors devoted to illustrating how OSPs can, in fact, address current problems that plague many different literatures. These problems include (among others): (a) a widespread unwillingness by authors to share their data, (b) considerable difficulty in reproducing published findings even when the data are available, (c) an alarmingly high rate of confirmed hypotheses in the published literature that is most likely due to (d) the journals’ unwillingness to publish null findings and (e) the resulting file drawering of statistically nonsignificant findings, and (f) an underrepresentation of replication studies (Ryan & Tipu, 2022). Moreover, the way in which the authors describe these problems is rather misleading. For instance, there is a broad consensus in the literature that questionable research practices (QRPs; e.g., only reporting supported hypotheses) are not the same thing as plain fraud (e.g., Banks et al., 2016). However, the authors equate the two and incorrectly state that OSPs were developed to prevent fraud, although OSPs were in fact developed to reduce researchers’ use of QRPs.

More importantly for this commentary, however, and deviating from Guzzo et al. (2022), we are not convinced that IWO Psychology is currently governed by norms that define “good” research. By contrast, OSPs have repeatedly been shown to have the potential to improve the currently problematic state. For instance, studies with high statistical power, methodological transparency, and preregistration were found to be highly replicable, a very desirable but often unrealized feature of psychology research (Protzko et al., 2020). Moreover, preregistered studies were found to be more transparent in the reporting of their findings than non-preregistered studies and exhibited a lower rate of confirmed hypotheses (Toth et al., 2020). As a last example, Open Science Badges were found not only to increase the sharing of materials and data; the shared data were also more likely to be available and correct, as well as usable and complete (compared with when authors merely indicated data availability; Kidwell et al., 2016). Altogether, applying OSPs to IWO Psychology research would change currently prevailing problematic research practices for the better.

There is a movement in the direction of transparency and full data sharing

Guzzo et al. (2022) perceive a peril in terms of disclosure and sharing. First, we want to acknowledge that data sharing can sometimes be problematic or even impossible, especially for company data that would give competitors valuable insights or that allow the identification of individual employees. Nonetheless, this problem has long been recognized, and there is no lack of constructive ideas on how to address it (Banks et al., 2016). In fact, Guzzo et al. themselves name a few of these measures (e.g., sharing only a relevant portion of a full data set or a correlation matrix).

More important in the context of the current commentary, however, is that there is clearly no visible movement in IWO Psychology toward complete transparency and full data sharing as an absolute requirement. There are two crucial pieces of evidence that Guzzo et al. (2022) neglect but that clearly support our perspective. First, the Transparency and Openness Promotion (TOP) guidelines (Nosek et al., 2015), which are used to design and evaluate journal policies regarding Open Science, very clearly articulate that even the two strictest levels of data transparency (Level 3 and Level 2) do not prescribe public data sharing (see Table 1 for the exact formulations of these levels). Importantly, even if data sharing were required (which is currently not the case; see Table 1), data would not have to be shared publicly; rather, they would have to be provided in the review process to allow for an independent reproduction of the reported analyses prior to publication. Similarly, if the second strictest transparency level were applied by a journal (which is currently the case for only 29 of 240 Psychology and 6 of 40 Business & Management journals; see Table 1), researchers could always argue why they cannot or do not want to share (or upload) their data.

Table 1. Overview of the currently realized TOP standard “data transparency” for the journal categories “Business & Management” and “Psychology” (taken from the TOP Factor Website [https://topfactor.org/])

Note. Some journals are included in both “Business & Management” and “Psychology.”

Second, data on how journals currently adopt these standards clearly show that most journals have not implemented any data sharing requirements at all so far. For instance, an analysis of the current implementation of data transparency requirements for “Psychology” and “Business and Management” journals using the TOP Factor website (https://topfactor.org/) reveals that more than half of the journals in Psychology and about two-thirds of the journals in Business and Management do not have any data sharing-related requirements at all (see Table 1). Moreover, there is only a single journal in Psychology, namely Meta-Psychology (one out of 240), and no journal in Business and Management for which data sharing is an absolute requirement. In our own current research, we found similar results specifically for IWO journals (Torka et al., 2022). Thus, the authors’ claims regarding data sharing appear to be exaggerated, and their recommendation on how to solve this problem is already fully implemented in the TOP guidelines (Nosek et al., 2015). To sum up, the idea underlying OSPs is not to force researchers to engage in unwanted research practices, nor is there any movement in such a direction. There is also no competition between journals to achieve the strictest levels of transparency and openness. By contrast, data sharing requirements can help to alleviate the current lack of data sharing and the related problems with reproducing reported findings in the research literature.
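As an illustration of how such counts can be obtained, the following minimal sketch tabulates journals by their TOP data transparency level from a CSV export of the TOP Factor website. The file name (top_factor_export.csv) and the column names (“Journal”, “Discipline”, “Data transparency”) are assumptions chosen for illustration and may differ from the actual export format.

```python
# Minimal sketch: tally journals by TOP "data transparency" level per discipline.
# Assumes a hypothetical CSV export from https://topfactor.org/ with columns
# "Journal", "Discipline", and "Data transparency" (0 = no policy, 1-3 = TOP levels).
import pandas as pd

df = pd.read_csv("top_factor_export.csv")  # hypothetical file name

for discipline in ["Psychology", "Business & Management"]:
    subset = df[df["Discipline"] == discipline]
    counts = subset["Data transparency"].value_counts().sort_index()
    total = len(subset)
    no_policy = counts.get(0, 0)
    print(f"{discipline}: {total} journals, "
          f"{no_policy} ({no_policy / total:.0%}) with no data-sharing requirement")
    for level, n in counts.items():
        print(f"  Level {level}: {n} journals")
```

Such a tabulation would make it straightforward to track over time whether journals actually move toward stricter data sharing requirements.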

Replications as a driver of research with inadequate statistical power

When describing the paradox of replication, Guzzo et al. (2022) appear to have (fundamental) misconceptions about OSPs. First, given the underrepresentation of replication studies (Ryan & Tipu, 2022), which is accompanied by typically low statistical power in the original studies, we do not currently know which findings from IWO Psychology can be replicated at all and which findings cannot. There is little reason to assume that IWO Psychology has a higher replication rate than the rather low rates of its neighboring fields such as Management, Marketing, or Economics, because the prevailing incentives and publishing practices are highly similar across these fields.

Second, even if all IWO Psychology findings were replicable, it would be necessary to conduct further independent and exact replication studies. These studies would allow controlling for sampling error in the original studies and for QRPs (e.g., reporting only supported hypotheses or arbitrarily excluding outliers to obtain significant results), and they would test whether findings generalize across different populations, companies, countries, and cultures. This is why replication studies typically find smaller effect sizes and provide more accurate effect size estimates than the original studies (e.g., Ebersole et al., 2020; Stewart & Shapiro, 2000). Thus, in contrast to the assertions of Guzzo et al. (2022), IWO Psychology needs not only conceptual but also exact replications.

Third, and perhaps most importantly, the authors assert that replications will encourage projects with small sample sizes (Guzzo et al., 2022). However, since the beginning of the recent systematic replication efforts, there has been a broad consensus that replication studies have to be adequately powered (e.g., Simonsohn, 2015). In line with this consensus and in contrast to the assertions by Guzzo et al., replication studies typically have much more statistical power than the original studies. This important feature applies not only to large-scale replications (e.g., Ebersole et al., 2020) but also to regular replication studies (e.g., Stewart & Shapiro, 2000). Hence, given the higher power of replication studies, the assertion that replications run the risk of overestimating the magnitude of “true” relationships is simply wrong. In fact, the opposite is true: Well-powered replication studies often help to adequately assess the magnitude of “true” relationships, and they typically find (much) smaller effect sizes than the original studies (e.g., Ebersole et al., 2020; Stewart & Shapiro, 2000).
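To illustrate what “adequately powered” can mean in practice, the following minimal sketch (not part of the focal article or of our own analyses) computes the per-group sample size a replication would need to detect a hypothetical original effect with 90% power, and contrasts it with Simonsohn’s (2015) “small telescopes” heuristic of collecting 2.5 times the original sample size. The effect size and sample size values are assumptions chosen for illustration.

```python
# Minimal sketch: power planning for an exact replication of a two-group design.
from statsmodels.stats.power import TTestIndPower

original_d = 0.40   # hypothetical effect size (Cohen's d) reported in the original study
original_n = 50     # hypothetical per-group sample size of the original study

analysis = TTestIndPower()

# Per-group n needed to detect the original effect with 90% power at alpha = .05
replication_n = analysis.solve_power(effect_size=original_d, power=0.90, alpha=0.05)

# Simonsohn's (2015) rule of thumb: collect 2.5 times the original sample size
small_telescopes_n = 2.5 * original_n

print(f"90% power for d = {original_d}: about {replication_n:.0f} participants per group")
print(f"Small-telescopes heuristic: {small_telescopes_n:.0f} participants per group")
```

Both approaches typically yield replication samples that are substantially larger than those of the original studies, which is exactly the pattern observed in published large-scale replication projects.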

Preregistrations explicitly rely on a hypothetico-deductive approach

Guzzo et al. (2022) postulate a pronounced dichotomy between hypothetico-deductive research and applied research (i.e., field research in organizations) regarding their respective adequacy in view of OSPs (e.g., regarding preregistrations). We believe that this dichotomous perspective is false for several reasons. First, preregistering a study does not limit the possibilities to engage in any type of research. In fact, it does not inhibit creating new explanations after data collection. It rather prevents presenting a post hoc finding as having been predicted a priori (i.e., Hypothesizing After the Results are Known [HARKing]). Moreover, it is not OSPs but research practices such as HARKing and other QRPs that devalue exploratory studies, because far too often it is not clear how researchers arrived at the findings they report in their manuscripts. By contrast, preregistering a study simply helps (a) to distinguish between studies that develop versus test hypotheses or (b) to illustrate at which points in time (or in which analytical steps) hypotheses are developed and then tested when the same data set is used for both activities. In fact, preregistrations have been likened to a systematic research log that clarifies which findings are consistent with and which diverge from expectations (Haven et al., 2020). Such a research log can build the needed trust in reported (exploratory) findings because the research process becomes more transparent.

Moreover, preregistrations can also accommodate innovative approaches, for instance analyzing big data from organizations, which Guzzo et al. (2022) describe as particularly problematic in their article. The different preregistration templates offered in the Open Science Framework (OSF; see Footnote 1), the most prominent tool for preregistrations in psychology, and especially the template for analyzing preexisting data by Mertens and Krypotos (2019), are known for their flexibility. We suspect that the authors’ mistaken impression stems from their apparently exclusive focus on the preregistration platform aspredicted.org and the related template, which is primarily tailored to the needs of experimental research.

Second, preregistrations are neither rigid nor “one-size-fits-all” devices that cannot be adapted to one’s own project. Rather, there are various templates for all kinds of research approaches and methods, which clearly shows that preregistration is not limited to the hypothetico-deductive research approach. For instance, templates are available for qualitative research (i.e., a research approach that is very often used for inductive projects; Haven et al., 2020), for the analysis of preexisting data (Mertens & Krypotos, 2019), and for meta-analyses. Hence, we do not share the authors’ presented dichotomy of research approaches because preregistration comes in many different forms and can be handled very flexibly. In summary, we do not believe that OSPs would prevent any research in IWO Psychology because no one will be forced to apply OSPs that would harm a research project, are inconsistent with ethical guidelines, or would get a person or company in trouble. By contrast, we firmly believe that more transparency and openness have the potential to improve and strengthen current and future IWO Psychology research.

Footnotes

1 As a side note and as a reply to the authors’ observations (see Guzzo et al., 2022, p. 44), it is possible to preregister a study on the OSF with email addresses that end in .com, including when using the aspredicted.org template (and further templates). It is furthermore possible to preregister studies when the data have already been collected at the time of preregistration.

References

Banks, G. C., O’Boyle, E. H. Jr., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A. A., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/gf2k58
Ebersole, C. R., Mathur, M. B., Baranski, E., Bart-Plange, D. J., Buttrick, N. R., Chartier, C. R., Corker, K. S., Corley, M., Hartshorne, J. K., IJzerman, H., Lazarević, L. B., Rabagliati, H., Ropovik, I., Aczel, B., Aeschbach, L. F., Andrighetto, L., Arnal, J. D., Arrow, H., Babincak, P., … Szecsi, P. (2020). Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability. Advances in Methods and Practices in Psychological Science, 3(3), 309–331. https://doi.org/gkqrqx
Guzzo, R. A., Schneider, B., & Nalbantian, H. R. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology, 15(4), 495–515.
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., Piñeiro, R., Rosenblatt, F., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1–13. https://doi.org/gjzbw6
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14(5), e1002456. https://doi.org/f8pkck
Mertens, G., & Krypotos, A.-M. (2019). Preregistration of analyses of preexisting data. Psychologica Belgica, 59(1), 338–352. https://doi.org/ggdc45
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/gcpzwn
Protzko, J., Krosnick, J., Nelson, L. D., Nosek, B. A., Axt, J., Berent, M., Buttrick, N., DeBell, M., Ebersole, C. R., Lundmark, S., MacInnis, B., O’Donnell, M., Perfecto, H., Pustejovsky, J. E., Roeder, S., Walleczek, J., & Schooler, J. W. (2020). High replicability of newly-discovered social-behavioral findings is achievable. PsyArXiv. https://doi.org/h5rt
Ryan, J. C., & Tipu, S. A. (2022). Business and management research: Low instances of replication studies and a lack of author independence in replications. Research Policy, 51(1), 104408. https://doi.org/gnk94x
Simonsohn, U. (2015). Small telescopes: Detectability and the evaluation of replication results. Psychological Science, 26(5), 559–569. https://doi.org/gfpn9t
Stewart, M. M., & Shapiro, D. L. (2000). Selection based on merit versus demography: Implications across race and gender lines. Journal of Applied Psychology, 85(2), 219–231. https://doi.org/bwv89b
Torka, A.-K., Mazei, J., Bosco, F., Cortina, J., Götz, M., Kepes, S., O’Boyle, E., & Hüffmeier, J. (2022). Open science practices in management and industrial and organizational psychology: Not much talk, not much walk. Manuscript under journal review.
Toth, A. A., Banks, G. C., Mellor, D., O’Boyle, E. H., Dickson, A., Davis, D. J., DeHaven, A., Bochantin, J., & Borns, J. (2020). Study preregistration: An evaluation of a method for transparent reporting. Journal of Business and Psychology, 36, 553–571. https://doi.org/ghcrqm