
Opening a “closed door”: A call for nuance in discussions of open science

Published online by Cambridge University Press:  27 January 2023

Jenelle A. Morgan*, Brittany L. Lindsay, and Chelsea Moran
University of Calgary, Calgary, Alberta, Canada

*Corresponding author: Email: jenelle.morgan@ucalgary.ca

Type: Commentaries
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Open science encourages researchers to consider how they can enhance the transparency and accessibility of their studies for the benefit of the scientific community and the public (van der Zee & Reich, 2018). Discussions citing the potential for the open science movement to harm scientific advancement, like Guzzo et al. (2022), often conflate critiques of specific open science practices with the inherent value of its underlying philosophy. In reality, the philosophy of open science can be enacted in various ways. As trainees navigating the evolving requirements of what "credible" and "valid" research entails, we find it exciting that psychology researchers are having conversations about innovative methods that enhance the trustworthiness, robustness, and accessibility of research efforts. This perspective differs from Guzzo et al.'s position that open science is incompatible with many aspects that drive scientific advancement, especially when it comes to applied research. We noticed that each presumed incompatibility highlighted by the authors characterized extreme implementations of open science practices. This "all or nothing" framing minimizes the underlying philosophy of open science. Our position is that the philosophy of open science allows more flexibility, and that the examples Guzzo et al. provided misrepresent two open science practices: preregistration and open data sharing. Consequently, this response focuses on (a) establishing that the philosophy of open science does not dictate an all or nothing approach, (b) rebutting the claims that research is held back by overidentification with the hypothetico-deductive method and by data sharing, and (c) highlighting select benefits of open science practices.

The philosophy of open science: Not all or nothing

The notion of open science is meant to challenge researchers to contemplate how their studies can be refined through communal efforts and to alleviate the knowledge resource gaps within the research community. In our view, open science is at the crux of important questions facing the field: How can we hold each other accountable to enhance the rigor of our scientific practice? Which groups' perspectives are captured and reflected in research outputs, and whose are neglected? Who should be able to access research products, at all phases of research, including but not limited to dissemination of results? Any study that attempts to advance our understanding of these questions by upholding principles of transparency and accessibility is, in our view, embodying the philosophy of open science.

Guzzo et al. presented this paradigm as though it requires a commitment to all of its methods when, in reality, there is nuance in how the philosophy of open science is, or can be, enacted. Preregistration is but one way to reflect transparency, and data sharing is only one path to accessibility. Thus, a study that strives to detail its methods so that other researchers can leverage its techniques, but that cannot disclose sensitive data, is not precluded from reflecting the principles of open science. Likewise, early career researchers who cannot afford open access fees have the option of preprinting their studies at no cost. In essence, a study can comply with the underlying philosophy of open science without applying the entire repository of open science practices, and any given open science practice can be applied in different ways that are appropriate for a specific study or research context. In other words, we do not agree that efforts to engage in open science should be abandoned altogether if a particular practice does not suit a particular set of circumstances. Engagement in open science should be approached and evaluated with a flexible, rather than all or nothing, attitude.

Missed marks in the presumed incompatibilities of open science

For brevity, we mainly wish to address two presumed incompatibilities between open science and scientific advancement offered by Guzzo et al.: overidentifying with the hypothetico-deductive method and barriers to sharing and disclosure. We believe these have important implications for whether emerging and established researchers would consider engaging in open science practices.

Overidentifying with the hypothetico-deductive method

HARKing (i.e., hypothesizing after the results are known) is an intentional misrepresentation of one's scientific process: portraying research as hypothetico-deductive when, in reality, the researcher either followed a more abductive approach or adjusted hypotheses to align with the findings of the study (Kerr, 1998). HARKing is not, as Guzzo et al. imply, a harmful label attributed to all forms of research that do not provide a priori hypotheses. The charge of HARKing cautions against a form of deception that co-opts the strengths of the hypothetico-deductive approach (i.e., designing a study based on a theory-informed research hypothesis) to misrepresent findings that were achieved through other means. Therefore, admonishing HARKing is not a villainization of other methods of research inquiry that do not intend to use a priori hypotheses. Guzzo et al.'s arguments are misleading: holding researchers who apply the hypothetico-deductive method accountable for HARKing does not preclude exploratory or qualitative research because, by definition, these forms of inquiry do not propose to reject a null hypothesis. Being open that a study followed an abductive or inductive line of inquiry is not HARKing; it aligns with the open science principle of transparency. How exploratory research is received in the scientific community is an issue to be taken up with journals or with the culture of research in our field that prioritizes the hypothetico-deductive method (Ones et al., 2017), not with open science.

These arguments about HARKing highlight the complexity of exploratory work, where it is natural for new lines of inquiry to emerge during the research process. We agree that, in the case of truly exploratory work, preregistration of research questions or hypotheses can be an unrealistic expectation. The purpose of preregistration within hypothetico-deductive studies is to enhance transparency and accountability to preset hypotheses. There are several ways in which exploratory work could meet these objectives without compromising the strengths of its method of inquiry, including self-disclosure of potential biases (e.g., reflexivity/subjectivity statements; see Lazard & McAvoy, 2020) or detailing the evolving nature of research questions and hypotheses within published research reports themselves. However, preregistration and exploratory research are not always mutually exclusive. Initial research questions, theoretical frameworks, and the methodologies to be used as exploratory work unfolds can be preregistered on platforms like the Open Science Framework (OSF; which accepts all email domains, not just .edu) and AsPredicted. For example, OSF has a preregistration template specifically for qualitative studies (see https://osf.io/j7ghv/). It is also important to note that preregistration does not have to be a single event: additional research questions and hypotheses, or changes to the study protocol, can be appended, with rationale, as the study and knowledge evolve. As discussed by DeHaven (2017), preregistration is not meant to be a prison but rather a way to encourage transparent disclosure of decisions made throughout the scientific process that may ultimately influence study outcomes.
Further, for researchers who are concerned that preregistering a study would increase the chances of their research ideas or unique methodology getting scooped (i.e., stolen), there is the option to keep preregistrations hidden until public disclosure is appropriate. For example, AsPredicted keeps preregistrations private (but linked to the original submission date) until all authors consent to making them public, which it recommends only after the manuscript is accepted for publication. Ultimately, preregistration is a useful tool for drawing a clearer distinction between data-driven and confirmatory work (Nosek et al., 2018).

Sharing and disclosure

Research can also be made more accessible and accountable through sharing open data and materials. Although Guzzo et al. acknowledge that "some data sharing is preferable to none" (p. 11), they proceed to make alarmist statements about how setting a standard for open data sharing would have led them to have "far fewer opportunities to contribute to science through peer-reviewed publications than [they] have had to date" (p. 13) and that their "data-based publications would actually [have] been reduced by at least 75%" (p. 13). They imply that journals are, or will soon be, refusing to publish articles that are not accompanied by some form of open data or materials. That is unequivocally false. For instance, Guzzo et al. mentioned the Journal of Business and Psychology's (JBP) encouragement of data disclosure; however, JBP has not made this a requirement. With the exception of PLoS One (Houtkoop et al., 2018) and Archives of Scientific Psychology (Polanin & Williams, 2016), we are unaware of any other journals within or tangential to the psychological sciences that have made data sharing a stipulation for publication. Journals instead primarily require a data disclosure statement to be assessed by editors during the peer-review stage, or require data to be made available to peer reviewers, though even this is not widely practiced (Polanin & Williams, 2016). In fact, data sharing remains an uncommon practice across several fields in psychology, as researchers cite fears of their data being misinterpreted or difficulties with anonymizing the data (a non-exhaustive list; see Houtkoop et al., 2018). We agree with Guzzo et al. that requiring all research inquiries, of any kind, to include shareable data to be considered for publication would be a harmful practice that excludes studies that cannot meet this standard (e.g., sensitive research involving equity-deserving or vulnerable groups). But sharing data alongside published articles is far from being a widely implemented requirement and likely will not become one. Therefore, Guzzo et al. offer no substantive rationale for being concerned about the implications of this practice.

Further, we challenge all researchers to re-examine their assumptions about the feasibility of sharing their data or other study materials in some form. Although not all data can (or needs to) be open, the open science philosophy of accessibility can be enacted in multiple ways. For example, this may include sharing resources that were developed or used in executing the study, providing detailed and supplementary materials about the methodology (e.g., copies of measures, vignettes), being available to answer queries about research practices, and engaging in knowledge mobilization efforts to ensure that research is directly accessible to the community it affects. Thus, we encourage researchers and journals to use (or continue to use) their discretion when considering the philosophy of open science and how accessibility can be appropriately enacted for the study in question.

The notion of sharing resources or data (or, conversely, not doing so) also raises the question of equity within the scientific community. Guzzo et al. mentioned the effort research teams invest in developing materials, effort they suggest is rendered futile if those resources are shared with the scientific community. Thus, when we develop materials such as surveys (or adapted measures), interview guides, and algorithms, there is an instinctive pull to save these resources for our own future use and leave other researchers to develop the same tools on their own. Although we understand this perspective, we question whether knowledge can truly advance if marginalized communities (e.g., unfunded programs or labs) lack the resources to systematically scrutinize and build on the methods and findings of more privileged research groups. The ability to recruit and develop a research team is a privilege reserved for a select few, and desiring dynamic methods to address the same problem means allowing equitable access to these resources. Of course, this is not always practical, as studies may use proprietary tools from organizations, but for researchers who do have this choice, it is an important consideration for bridging the resource gap among scientists.

For those who agree with the premise of bridging the knowledge resource gap within the scientific community but are reluctant to share their materials, there are options to license them. Bezjak et al. (2018) created a resource describing the types of licenses available, which protect the intellectual property of the researcher's work and require that they be credited when their tools are used by other scientists. There is also the option of simply providing instructions directly on online repositories for how data and materials should be acknowledged. Or, as practiced by some researchers (see the example from the INDIGO network regarding their stigma and discrimination scales: https://www.indigo-group.org/stigma-scales/), one could require researchers to fill out a form outlining the purpose behind their use of the data or materials, to keep track of who is using these tools and for what reason.

Getting around de-identifying data

Guzzo et al. mentioned the complexities that come with attempting to de-identify data prior to data sharing, as de-identification may still leave participants potentially exposed. In addition to Banks et al.'s (2019) recommendation to omit personal identifiers (e.g., demographics) from the dataset, researchers could also simulate a dataset whose aggregate statistics (i.e., means, standard deviations) are similar to those of the original. This retains the statistical properties of the original data, allowing other researchers to run confirmatory analyses if desired, while protecting individuals' responses and characteristics (see Quintana, 2020).
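The simulation approach described above can be sketched in a few lines. This is only an illustrative sketch, not Quintana's (2020) actual workflow (his primer describes dedicated synthetic-data tooling); the toy "original" dataset and variable names here are hypothetical, and it assumes roughly continuous, normally distributed scale scores:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "original" dataset: 200 participants x 3 numeric survey scales.
# In practice this would be the real, potentially identifiable dataset.
original = rng.normal(loc=[3.2, 4.1, 2.7], scale=[0.9, 1.1, 0.6], size=(200, 3))

# Estimate the aggregate statistics to preserve: the mean vector and the
# covariance matrix (which encodes both SDs and inter-scale correlations).
mu = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# Draw a synthetic sample from a multivariate normal with those statistics.
# No synthetic row corresponds to any real participant, but means, SDs, and
# correlations are approximately reproduced, so analyses can be re-run on it.
synthetic = rng.multivariate_normal(mu, cov, size=len(original))
```

The synthetic sample reproduces the aggregate statistics only up to sampling error, which is usually acceptable for verification purposes; non-normal or categorical variables would need a more careful generative model.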

Benefits of open science in research

As Guzzo et al. expressed concerns regarding the implications of open science practices, we want to briefly address the benefits to individual researchers, as well as to the research community in general. First, regarding how this paradigm can protect scientists: researchers who choose to practice the philosophy of open science through preregistration also benefit from intellectual coverage. For instance, the second author recently encountered an issue during the revision process of one of their manuscripts in which their choices were perceived as capitalizing on an analytical method post hoc; however, the decision to use that particular type of analysis had been preregistered and justified a priori, protecting the author from the ethical issues that could arise from this misconception. Second, Guzzo et al. outline how open science regresses the advancement of science, especially as it pertains to conducting complex studies and analyses (e.g., meta-analyses). But if researchers (who are able to) share their data around a specific research question, this creates the opportunity to conduct individual participant data meta-analyses (Polanin & Williams, 2016). These offer another robust way to examine new research questions or evaluate the replicability of findings using the available raw data, instead of the aggregate statistics used in a typical meta-analysis (Riley et al., 2010).

Conclusion

It is important to understand that open science acts as a guide for how researchers can engage with the scientific community and the general public by generating transparent and accessible research. Expecting studies to meet the objectives of all open science practices is a misrepresentation of what this paradigm advocates. For emerging and current researchers wanting to implement these tools, the goal should be to consider how their research embodies this philosophy in a manner that is appropriate for each study, given the resources and expertise available. Abiding by this approach requires nuanced thinking about open science, which is often more difficult in practice than in principle. We recommend that researchers of all career stages seek out opportunities to learn about these nuances before deciding that open science is "closing doors" and will "hold back the advancement of knowledge" (p. 16). Through our involvement with the Open Science Student Support Group, a student-led community at the University of Calgary that aims to promote, normalize, and implement open and inclusive scientific research practices, we have felt supported in our journey to enact the philosophy of open science in ways that suit both our own research and existing publishing norms. We invite the readers of this issue and commentary to join us in opening the doors to open science.

References

Banks, G. C., Field, J. G., Oswald, F. L., O'Boyle, E. H., Landis, R. S., Rupp, D. E., & Rogelberg, S. G. (2019). Answers to 18 questions about open science practices. Journal of Business and Psychology, 34, 257–270. https://doi.org/10.1007/s10869-018-9547-8
Bezjak, S., Conzett, P., Fernandes, P., Görögh, E., Helbig, K., Kramer, B., Labastida, I., Niemeyer, K., Psomopoulos, F., Ross-Hellauer, T., Schneider, R., Tennant, J., Verbakel, E., & Clyburne-Sherin, A. (2018). Open research data and materials. In Open Science Training Handbook. https://book.fosteropenscience.eu/en/
DeHaven, A. (2017). Preregistration: A plan, not a prison. Center for Open Science. https://www.cos.io/blog/preregistration-plan-not-prison
Guzzo, R. A., Schneider, B., & Nalbantian, H. R. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice, 15, 495–515.
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V. M., Nichols, T. E., & Wagenmakers, E.-J. (2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70–85. https://doi.org/10.1177/2515245917751886
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
Lazard, L., & McAvoy, J. (2020). Doing reflexivity in psychological research: What's the point? What's the practice? Qualitative Research in Psychology, 17(2), 159–177.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606.
Ones, D. S., Kaiser, R. B., Chamorro-Premuzic, T., & Svensson, C. (2017). Has industrial-organizational psychology lost its way? Organizational Psychology, 7(2), 136.
Polanin, J. R., & Williams, R. T. (2016). Overcoming obstacles in obtaining individual participant data for meta-analysis. Research Synthesis Methods, 7(3), 333–341. https://doi.org/10.1002/jrsm.1208
Quintana, D. S. (2020). A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis generation. eLife, 9, e53275. https://doi.org/10.7554/eLife.53275
Riley, R. D., Lambert, P. C., & Abo-Zaid, G. (2010). Meta-analysis of individual participant data: Rationale, conduct, and reporting. BMJ, 340, c221. https://doi.org/10.1136/bmj.c221
van der Zee, T., & Reich, J. (2018). Open education science. AERA Open, 4(3). https://doi.org/10.1177/2332858418787466