
Spanish Scientists’ Opinion about Science and Researcher Behavior

Published online by Cambridge University Press:  05 February 2021

Dolores Frias-Navarro*, Universitat de València (Spain)
Marcos Pascual-Soler, ESIC Business & Marketing School (Spain)
José Perezgonzalez, Massey University (New Zealand)
Héctor Monterde-i-Bort, Universitat de València (Spain)
Juan Pascual-Llobell, Universitat de València (Spain)

*Correspondence concerning this article should be addressed to Dolores Frias-Navarro, Universitat de València, Departament de Metodologia de les Ciències del Comportament, Valencia (Spain). E-mail: M.Dolores.Frias@uv.es

Abstract

We surveyed 348 Psychology and Education researchers within Spain on issues such as their perception of a crisis in Science, their confidence in the quality of published results, and the use of questionable research practices (QRPs). Their perceptions regarding pressure to publish and academic competition were also collected. The results indicate that a large proportion of the sampled Spanish academics think there is a crisis in Science, mainly due to a lack of economic investment, and doubt the quality of published findings. They also feel strong pressure to publish in high impact factor journals and perceive a highly competitive work climate.

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid

The analysis of questionable research practices is a topic that has aroused considerable interest since the beginning of the 21st century, due to its link to the controversy over the so-called crisis in Science, a controversy that is directly related to the debate on the lack of replication of published findings (Baker, 2016; Benjamin et al., 2017; Fanelli, 2018; Frias-Navarro et al., 2020; Ioannidis, 2005, 2019; Kerr, 1998).

The debate on the crisis in Science itself probably dates back to classical discussions related to publication bias, the use and abuse of statistical significance tests, and the lack of philosophical understanding of the statistical inference process (see reviews by Giner-Sorolla, 2018; Llobell et al., 2000; Monterde i Bort et al., 2006). Indeed, Altman (1994) states that "we need less research, better research, and research done for the right reasons", and points out that it is common for researchers to use the wrong techniques, employ the right techniques in the wrong way, misinterpret results, report results selectively, cite the literature selectively, and express conclusions that are not justified by the findings. Consequently, the literature is full of articles with methodological weaknesses, inappropriate designs, and incorrect analytical methods. Although time has passed, the issues Altman mentioned are still present in the conduct of today's researchers, a problem that contaminates the foundations of Science as a source of rigor and quality.

In addition, lack of training in research design and statistical analysis is a key element that can lead researchers to make irresponsible decisions (Altman, 1994; Frias-Navarro et al., 2020). Determining why researchers carry out questionable research practices involves taking into account a set of variables linked to the researcher (personality traits, dysfunctional personality, moral attitudes…), the social and institutional context of scientific practice (the system of awarding funding to research groups, competitiveness, pressure to publish, reinforcement mechanisms to promote one's work, emphasis on publishing statistically significant results…), and the methodological training received. In short, this is a multi-causal problem (Bouter, 2015).

Questionable research practices (QRPs) are those which, consciously or unconsciously, "massage" the attractiveness of a finding, increasing the prospects of a scientific publication and future citations (John et al., 2012; Simmons et al., 2011). Given that researchers have a certain degree of flexibility throughout the research design process, some of these decisions may be directed at making the finding match the researcher's wish (e.g., a statistically significant result in support of a hypothesis, or a non-significant one supporting the assumptions of a statistical procedure), thus increasing the likelihood that a paper with this result will be published (Blanco et al., 2017; Matthes et al., 2015). Such practices may occur with or without intent to deceive (Banks, O'Boyle, et al., 2016; Banks, Rogelberg, et al., 2016). Nor is every practice that occurs while a researcher analyzes data questionable. Indeed, Banks, Rogelberg, et al. (2016) differentiate between practices that pose no problems, practices that imply suboptimal usage but are not overly problematic, and QRPs that pose a serious threat to the inferences made based on the results reported.

Questionable research practices motivate researchers to make decisions designed to achieve desirable results in their studies, leading to p-hacking (forcing the data until they are statistically significant), harking (elaborating hypotheses after the findings are known), sharking (removing hypotheses after the findings are known), and cherry-picking (e.g., reporting only findings that confirm the researcher's hypotheses, or fit indices in structural equation modeling) (Hollenbeck & Wright, 2017; Rubin, 2017). QRPs differ from scientific fraud insofar as QRPs do not fabricate or falsify data for the purpose of publishing fictitious results (Wells & Farthing, 2019). They are questionable simply because they distort the data in order to support the researcher's goals. In brief, they are "design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion" (Banks, O'Boyle, et al., 2016, p. 7).
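To illustrate why this kind of selective reporting is problematic, the following simulation is a minimal sketch (a hypothetical example with illustrative parameter values, not drawn from the article or the survey). It estimates how often a study would report at least one "significant" outcome when several unrelated outcome variables are measured under a true null hypothesis and only the significant ones are kept.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical scenario: each study measures k unrelated outcomes in two
# groups drawn from the same population, so every true effect is zero.
# Reporting only outcomes with p < .05 ("cherry-picking") inflates the
# chance that a study presents at least one false-positive finding.
n_per_group, k_outcomes, n_studies = 30, 8, 5_000
false_positive_studies = 0

for _ in range(n_studies):
    significant = False
    for _ in range(k_outcomes):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)  # same population: H0 is true
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            significant = True            # keep only this outcome
    false_positive_studies += significant

print(f"Studies with at least one 'significant' outcome: "
      f"{false_positive_studies / n_studies:.2%}")
```

With eight outcomes the expected rate is roughly 1 − 0.95^8 ≈ 34%, far above the nominal 5% error rate, which is exactly the inflation that selective reporting hides from readers.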

On the other hand, some actions artificially increase the scientist's production or impact on the literature, such as self-plagiarism; "salami slicing" (segmenting the results of a study in order to produce several publications); honorary authorships (including those based on the hope of reciprocal authorship in future publications); "ghost" authorship; and excessive use of self-citations, oftentimes not relevant to the research (Ding et al., 2020; Hollenbeck & Wright, 2017).

Fanelli's (2009) systematic review of scientific research misconduct presents prevalence data obtained from 18 surveys with international participants, primarily from the biomedical field. The results indicate that up to 14% of researchers believe that scientists fabricate or falsify data, and up to 34% admit that they have performed some questionable research practices, including making changes in design or results in response to pressures from a funding source. In the case of dishonest conduct by colleagues, up to 72% of the respondents knew of a colleague who had carried out such behaviors. Regarding authorship, Kennedy and Barnsteiner (2014) identify authorship problems in nursing journals, noting that 42% of the articles had honorary authors, and 27% had ghost authorships.

In conclusion, the results of meta-research studies and knowledge about researchers' opinions of questionable research practices are fundamental in addressing this kind of practice. Our study continues this line of work. In fact, it is the first study on this topic carried out in Spain with academic participants from the fields of Psychology and Education. Our main aim is to uncover Spanish academics' perceptions of questionable research practices.

Method

Participants

The final sample is a convenience sample of 348 academics from Psychology and Education, with more women (53.4%) than men, aged between 23 and 69 years (M = 46.8, SD = 10.6, Mode = 52, Median = 48). They work mostly at a public university (91.7%) and in a permanent position (58.3%), with work experience ranging from a few months to 45 years (M = 16.0, SD = 10.75, Mode = 10, Median = 15). All respondents identified themselves as Spaniards.

The sample's researcher profile is mostly doctorate holders (85.6%), with moderate participation in the social dissemination of scientific results (60.6%) and publication in indexed journals (Web of Science, Journal Citation Reports; 54.9%), as well as acting as peer reviewers (69.5%), but with little leadership of publicly funded projects (88.5%) or editorial responsibilities, most being neither journal editors (85.1%) nor members of editorial teams (59.9%). As for their research itself, it is mostly of a quantitative orientation (57% of 242 participants), sometimes, but not always, exploring novel hypotheses (61.2% of 242 participants; see Footnote 1).

Instruments

We collated information using a questionnaire structured into eight sections:

  1. Sociodemographic variables.

  2. Perception of a current crisis in Science.

  3. Quality of academic research syllabus.

  4. Confidence in the quality of published scientific results and researcher's ethical behavior.

  5. Perception of questionable research practices.

  6. Opinion regarding the publication of statistically significant results, confidence in research conclusions, and fallacies in the interpretation of statistically significant results.

  7. Attitudes and beliefs regarding replication studies.

  8. Perception of pressure to publish and competition in academic contexts (see Footnote 2).

In regard to statistical fallacies, we asked about three particular fallacies. The "effect size fallacy" presumes that statistical significance informs about the size of the effect, so that small p-values equal large effects (Gliner et al., 2001; Kline, 2013). The "clinical or practical significance fallacy" presumes that statistical significance signals clinical or practical significance. And the "finding utility fallacy" presumes that statistical significance signals the usability of the results (Gliner et al., 2001, 2002; Kirk, 1996; Kline, 2013).
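As a brief numerical illustration of the effect size fallacy, the sketch below uses made-up parameter values (a hypothetical example, not data from this survey) to show that a trivially small true effect combined with a very large sample still produces an extremely small p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

# Hypothetical illustration: a negligible true effect (Cohen's d ≈ 0.03)
# measured in two very large groups.
n = 100_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.03, scale=1.0, size=n)

t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# Typical output: p far below .001, yet d ≈ 0.03, i.e., statistically
# significant but practically negligible.
print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")
```

The p-value responds to sample size as much as to effect magnitude, so a small p-value by itself says nothing about the size, clinical relevance, or practical utility of an effect.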

Procedure

Participants were canvassed among academic staff listed on the web pages of Psychology and Education departments of Spanish universities, controlling for duplicate entries.

Between May 28, 2018 and June 11, 2018, 3,402 researchers were randomly selected and invited to participate in the study via their publicly available email addresses. A first email notified them of the upcoming survey, including instructions and research objectives. A later email provided the link to the survey, managed via a Computer Assisted Web Interviewing (CAWI) system. A final reminder was sent seven days later to participants who had not yet accessed the survey.

A total of 545 surveys were completed (16.02%). The main retention criterion was for participants to have answered all survey items except for three optional questions. 348 participants fulfilled this criterion, effectively lowering the response rate to 10.23% (242 participants also answered the optional questions).
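The two reported rates follow directly from the counts above; the short sketch below simply reproduces that arithmetic (all figures taken from the text).

```python
# Reproduce the reported completion and retention-adjusted response rates.
invited, completed, retained = 3402, 545, 348

print(f"Completed surveys: {completed / invited:.2%}")      # 16.02%
print(f"Retained for analysis: {retained / invited:.2%}")   # 10.23%
```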

Analyses were carried out with IBM SPSS v. 26 for Windows.

Results

Perception of Crisis in Science

63.5% of the sample (n = 221) perceives Science to be in crisis. Participants who perceived a crisis had the opportunity to provide their opinion on its causes as an open-ended response. Content analysis of the 144 responses provided by 100 participants (k = 144, n = 100) identified two main attributed causes: a lack of economic investment (k = 51) and an emphasis on quantity of publications over quality (k = 20). Overall, for this subset of participants the crisis is perceived as something exogenous to Science, rather than intrinsic to researchers' individual behaviors or organizational and/or social factors (see Table 1).

Table 1. Open-ended Arguments to Explain the Crisis in Science (n = 100)

Note. Questions: "Do you currently think there is a crisis in Science?" and "If your answer to the previous question was 'Yes', please indicate why you think this crisis in Science exists".

Research Topics Addressed in University Research Syllabus

Most participants (51.2%) agree that research ethics is explicitly addressed in the course contents of Psychology and Education curricula, followed by meta-analysis, confidence intervals, effect sizes, and scientific misconduct (over a third of participants agree). On the other hand, more than 90% of participants claim not to receive explicit teaching on problems related to the terms p-hacking, harking, cherry picking, and sharking, all questionable research practices (however, some 10% to 18% of respondents may be aware of those topics from elsewhere, which results in some 75% to 87% of respondents being unaware of those particular problems) (Figure 1).

Figure 1. Research Questions Addressed in Course Syllabus

Note. Question: “Does your course program include any topics or do your classes deal with …”.

These results, however, may be due to respondents being unaware of the English nomenclature, as when asked about the practices in a more narrative manner (e.g., see Figure 2), responses seem to indicate they are more aware of such practices than otherwise claimed.

Figure 2. Questionable Research Practices

Note. Question: “Please honestly assess whether you believe that, in research practice, researchers engage in any of the following research behaviors”.

Confidence in the Quality of Published Findings and in Researchers’ Ethics

62.9% of participants express doubts regarding the quality of peer review, and 66.7% have doubts about the absence of errors in published studies (Table 2).

Table 2. Confidence in the Quality of Published Results

Note. Question: “Please rate each of the following issues in relation to your opinion about Science”. 1 = Do not agree; 2 = Somewhat agree; 3 = Agree; 4 = Strongly agree.

As for fraudulent behavior, participants have fewer doubts when they assess their own scientific integrity, that of their own team members and PhD students, and that of other researchers in their own institution. They have greater doubts when assessing the behavior of undergraduate students, followed by graduate students and researchers from other institutions (Table 3).

Table 3. Doubts about Fraud

Note. Question: “To what extent have you doubted the integrity (falsifying, inventing, adding, or removing data) of the research carried out by the following agents?” 1 = Do not agree; 2 = Somewhat agree; 3 = Agree; 4 = Strongly agree.

Questionable Research Practices

The study of fraudulent behavior indicates that only 5.8% of the sample strongly believes that there is fraud in Science. However, it should be noted that 30% indicate that there 'might' be fraud, and that 64.2% categorically state that there is no fraud (Figure 2).

The survey also asked about particular questionable research behaviors, especially those related to authorship, p-hacking, and harking (Figure 2). The practice of listing as co-authors researchers who have not worked on developing or carrying out the study, in exchange for reciprocal co-authorships elsewhere, stands out in first place, with 51.9% of participants strongly agreeing that researchers engage in this type of practice. In second place stands the practice of measuring several variables but only reporting those with statistically significant findings (37%). In third place stands harking (35.9%), that is, rewriting the introduction of the article to hypothesize an otherwise unexpected finding. In fourth and fifth places stand two behaviors that are clear examples of fraud because the study is intentionally manipulated by creating information the author does not have: Citing original studies that have not been read (i.e., fabrication of theoretical information or p-literature; 32.8%); and self-citation of articles that have little to do with the topic addressed in the study (falsification of information; 30.3%).

It should be noted that the less extreme 'possibly' response was the option most frequently chosen for most questionable practices, except for those regarding co-authorship ('yes' was the most frequent response) and data fraud ('no' was the most frequent response). For example, in regard to the behavior of rounding down the p-value to the alpha value (.05), the 'possibly' response is chosen more often than for any other item in the survey (61.5%).

Opinion about Statistically Significant Results

Participants mostly agree (42.5%) or strongly agree (24.1%) that researchers only publish studies when they find statistically significant differences. They also mostly agree (27%) or strongly agree (36.2%) that journals are not interested in publishing statistically non-significant results. Yet they agree less that statistical significance (or the lack of it) should determine when to stop research, the conclusions reached, the level of confidence in the quality of the underlying research, or publication prospects (Table 4).

Table 4. Opinion about Statistically Significant Results

Note. 1= Do not agree; 2 = Somewhat agree; 3 = Agree; 4 = Strongly agree.

Regarding statistical fallacies linked to the interpretation of the p-value, only 32 academics (9.2%) "strongly disagree" with all three fallacies. Thus, the majority of the sample commit at least one of the three fallacies (agreeing somewhat to strongly), highlighting the opinion that a statistically significant finding is an important and useful result in practice.

Opinion about Replication Studies

Participants almost unanimously point out that replication studies are necessary for Science to advance (98% agree somewhat to strongly). In addition, they think that replication is necessary when findings from different studies are contradictory (96.6% agree somewhat to strongly) yet unnecessary when findings are unanimous (72.4%). Moreover, most do not agree with linking the need for replication to the positive or negative results of a previous study (Table 5).

Table 5. Attitudes and Beliefs about Replication Studies and the Novelty of Hypotheses

Note. 1 = Do not agree; 2 = Somewhat agree; 3 = Agree; 4 = Strongly agree.

With regard to conducting only novel studies (versus replication studies), most participants believe that the main objective of scientific journals is to publish novel findings (82.8% agree somewhat to strongly), and that science advances more with studies that have novel hypotheses than with studies that replicate other research (69.49% agree somewhat to strongly), which is consistent with the earlier tendency for most participants to carry out studies with novel hypotheses (see 'Participants' section).

Pressure to Publish and Academic Competition

Finally, participants also report high levels of academic pressure and competition (Table 6).

Table 6. Perception of Pressure to Publish and Academic Competition

Note. Likert scale with 11 anchors, running from 0 = No pressure/no competition to 10 = Very strong pressure/competition.

Discussion

The study of questionable research practices can be framed in the area of scientific integrity and ethics, within a climate of perverse and hyper-competitive incentives, which Edwards and Roy (2017) describe as a corrupt academic culture. Such practices are equally related to problems of statistical comprehension and data interpretation (Badenes-Ribera et al., 2015). As Nosek et al. (2012) point out, the professional success of an academic scientist depends on publication, and publication standards favor novel and positive results, thus generating incentives that skew publications and, at the same time, the researcher's conduct. Our results indicate that nearly two-thirds of the academics surveyed (63.5%) express doubts about the quality of published findings. The results on the perception of questionable research practices show that some academics (51.9%) are particularly concerned about false authorship, because increased competition is coupled with a fraud that inflates the curriculum vitae of someone who might be a rival in Academia. A surprisingly small percentage of respondents categorically state that the questionable behaviors analyzed do not occur among researchers.

The overall picture we gain from our results is that most respondents believe that researchers only publish statistically significant results (93.1%) and that science advances most when novel hypotheses are proposed (77.6%), that scientific journals are not interested in publishing null results (84.5%) but in publishing novel findings (82.8%), and that replication studies are necessary when the published findings are contradictory (96.6%) but less so if the findings in the literature are unanimous (72.4%). In addition, over 50% of the respondents misinterpret the meaning of a statistically significant result and associate it with importance, the usefulness of the finding, and the size of the effect (Krueger & Heck, 2019). And while they may not agree with keeping a statistically non-significant result in the drawer (47.4%), they do not consider it a priority to publish these findings (66.4%). In brief, the majority of respondents say that a scientific conclusion should be based on whether or not the p-value is statistically significant (73.6%), and, as readers, they have more confidence in the quality of the study whenever the results are statistically significant (77.8%).

It should be noted that our research measured academics' perceptions of researchers' conduct in general, not the behaviors themselves. We felt that it was more useful to pose the questions in this way in order to avoid the inherent bias of assessing or drawing attention to the researcher's own questionable research practices. If we observe the results related to doubts about researchers' integrity and fraud, we can see that the majority of researchers do not doubt their own conduct (80.7%, although it should be noted that 19.3% doubt their own research ethics to some degree) or that of their collaborators (76.7%), focusing their greatest doubts on the practices of other researchers. However, because we did not ask respondents whether they had engaged in QRPs themselves, their answers may be more of a reflection on researchers' degrees of freedom than on scientific fraud proper. Furthermore, some 64% of respondents did not perceive falsification or fabrication of data as occurring, despite well-known exemplar cases such as Diederik Stapel's (Stroebe et al., 2012). However, it is possible that researchers were responding to whether they perceived falsification or fabrication of data as routine procedure in current practice, as opposed to awareness of such practices having occurred in the past (thus, about 64% do not perceive that data falsification or fabrication is common in current practice, irrespective of whether it has occurred in the past or not).

From an individual point of view, the number of publications influences recruitment decisions, salary, academic promotion, professional recognition, and the likelihood of obtaining a grant. For universities and departments, the number of publications by their academics is also relevant to their position in international rankings (Ball, 2005; Nosek et al., 2012). Governmental resources for research funding are much less available than researchers would like, and the criteria for accessing stable work in academia are based almost exclusively on the quantitative metrics of impact factors. Moreover, the researcher's excellence is measured using these same criteria, as can be seen in the public standards of Spanish universities. All this has favored a hyper-competitive academic environment, as reported by the academics who participated in our study. They feel highly pressured to publish according to the criteria mentioned above.

In order to interpret the findings of our research, we believe it is necessary to take into account the context of the academic climate and culture (the academic promotion of the scientist) as perceived by the researchers, a perception that has been verified in surveys carried out in other countries (e.g., Abbott et al., 2010; Fanelli, 2010; John et al., 2012). Pressure to publish and high competitiveness are two variables that could largely explain why researchers' behaviors become questionable. In addition, journals and their emphasis on novel and positive results (as opposed to replication studies and null results) encourage these questionable behaviors directed at obtaining results that have a high probability of being published (Fanelli, 2012). This leads to 'adjusting' certain aspects of the design, as well as carrying out other behaviors that may alter and improve the researcher's metrics, favored by the degrees of freedom of the researcher's conduct (Neuliep & Crandall, 1990). Certainly, actions such as pre-registration of research and publication of protocols, along with the promotion of open science and the transparency of the research design process, are essential in order to control certain questionable research practices, but the researcher and their personal needs will always lie behind these actions (Chambers, 2019; Nosek et al., 2012).

The results of our research indicate that, in answer to the question "Is there a crisis in Science?", approximately two-thirds of the academics surveyed think there is, and they attribute it mainly to a lack of economic investment, followed by the opinion that the quantity of publications takes precedence over their quality. If it is perceived that there are few economic resources and that the system values quantity more than quality, then the direct consequence perceived by the researcher is to 'publish or perish'. Because this involves publishing a lot, and the perception is that there is a greater chance of publishing new and statistically significant results, the researcher's aim is to carry out research that meets those characteristics. The current research culture, which has been developing for decades (Melton, 1962; Sterling et al., 1995), must change in order to change the researcher's behavior.

Our findings are in line with Baker's (2016) results on confidence in scientific data, but we found more pessimistic opinions. As Baker points out, the area of research is a variable to take into account because, for example, physicists and chemists tend to show more confidence. The results of Baker's (2016) survey also indicate that more than 60% of respondents believe that pressure to publish and selective reporting are the two main factors behind the crisis in Science and the lack of replication, along with the little research being done for replication purposes.

In light of this situation, we believe that support from institutions and funding agencies is essential and indispensable for changing researchers' behavior, along with journal policies and peer review, which must exercise their criteria by analyzing the validity of the results and the quality of the research design process, ignoring any issues not directly related to the scientific method. Publish or perish cannot be a criterion that justifies the researcher's behavior, but a change in incentives is essential as a motivating element for the scientist looking for a job. Certainly, it is difficult to measure scientific performance, and it becomes easier when counting the number of articles published, the impact factor of the journals, the number of citations received, the researcher's h index, or the amount of money received from project grants. However, the quality of scientific work is not reflected in these numbers. It is essential to assess the quality of the evidence provided by the results, and to do so, it is necessary to read the work and check the key elements that could compromise the different dimensions of its validity. The data themselves are not the most important aspect, but rather the procedure through which these data were obtained. And this type of assessment, directly aimed at the quality of the scientific method, is a means of improving the quality of science. It focuses on obtaining reliable and valid results that can be published in journals based on the quality of their contribution to scientific knowledge, regardless of whether the result was positive or not. To carry out these types of actions, checklist tools (CONSORT, STROBE, PRISMA…) are quite useful. They require the user (authors, reviewers, editors, or readers) to have methodological knowledge about all the elements being verified because their content tracks the entire process of the scientific method.

One of the most important limitations of our study is the type of sample used. It is a self-selected sample, so a self-selection bias among respondents cannot be ruled out (Bruton et al., 2020). As Baker (2016) points out, it is likely that the respondents were academics concerned about the quality of scientific findings. Furthermore, those researchers with more confidence in their own research practices may be the ones responding, in which case the findings may well underestimate the current rate of QRPs (Fraser et al., 2018). Indeed, Banks, Rogelberg, et al. (2016) point out that one of the more problematic concerns may be the underreporting of QRP engagement.

The low response rate (10.23%) is another limitation because it might affect the representativeness of the sample and, consequently, the generalizability of the results. This response rate is similar to those obtained by other researchers who used the same data collection system with academics (via email): 7% in Bruton et al. (2020), 10.26% in Badenes-Ribera et al. (2015), 10.58% in Badenes-Ribera et al. (2016), and 15% in Fraser et al. (2018). It is also worth pointing out that our interest was in assessing degree of agreement; thus, the "biased" scale used (with anchors 1 = Do not agree, 2 = Somewhat agree, 3 = Agree, 4 = Strongly agree) allowed us to assess such agreement while, at the same time, focusing on the extremes of the scale when interpreting the results.

The crisis of Science has been studied from different perspectives, including economic ones (e.g., funding constraints, or curriculum building directed towards tenure), failures of replication and credibility, failures in methodological training, and even the degree of social impact of research findings. Our study pertains to the line of research developed in the past decade that has extensively and profoundly reflected on the crisis in Science, questionable research practices, and the need for researchers' statistical re-education. Our study is the first to measure the opinions of Spanish Psychology and Education academics about researchers' behavior and the quality of scientific results. Our findings reflect on ethical behavior because, as Baker (2016) points out, it is healthy for the scientific community to be aware of the problems that surround publication in order to remedy them and provoke changes in researchers' behavior. We fully agree with the recommendations of Dorothy Bishop (2020) and the need to "understand the mechanisms that maintain bad practices in individual humans" in order to understand "why individual scientists mistake bad science for good, and helping them to resist these errors". Approaches to human cognitive biases are not new; for example, Mahoney (1976) pointed out reviewers' bias toward their favorite ideas. Thus, advancing knowledge about confirmation bias, the degree of morality attributed to errors of omission and commission, statistical fallacies, and the lack of understanding of the concept of conditional probability, which is involved in the use of the p-value (with a key role in planning statistical power and in the analysis and interpretation of the data), can improve research practices (Bishop, 2020).

We thank all the participants who answered the survey.

Footnotes

Conflicts of Interest: None.

Funding Statement: This work was supported by the University of Valencia (Spain) (Grant Number UV‐INV‐AE17‐698616).

1 Descriptions are available open access at https://osf.io/kgvq8

2 Survey, data, and coding are available open access at https://osf.io/kgvq8

References

Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & van Noorden, R. (2010). Metrics: Do metrics matter? Nature, 465, 860–862. http://doi.org/10.1038/465860a
Altman, D. (1994). The scandal of poor medical research. BMJ, 308, 283–284. http://doi.org/10.1136/bmj.308.6924.283
Badenes-Ribera, L., Frias-Navarro, D., Monterde-i-Bort, H., & Pascual-Soler, M. (2015). Interpretation of the p-value. A national survey study in academic psychologists from Spain. Psicothema, 27(3), 290–295. http://doi.org/10.7334/psicothema2014.283
Badenes-Ribera, L., Frias-Navarro, D., Pascual-Soler, M., & Monterde-i-Bort, H. (2016). Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists. Psicothema, 28(4), 448–456. http://doi.org/10.7334/psicothema2016.24
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533, 452–454. http://doi.org/10.1038/533452a
Ball, P. (2005). Index aims for fair ranking of scientists. Nature, 436, 900. http://doi.org/10.1038/436900a
Banks, G. C., O'Boyle, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A., & Adkins, C. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42, 5–20. https://doi.org/10.1177/0149206315619011
Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016). Editorial: Evidence on questionable research practices: The good, the bad, and the ugly. Journal of Business and Psychology, 31, 323–338. https://doi.org/10.1007/s10869-016-9456-7
Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., Bollen, K. A., Brembs, B., Brown, L., Camerer, C., Cesarini, D., Chambers, C. D., Clyde, M., Cook, T. D., De Boeck, P., Dienes, Z., Dreber, A., Easwaran, K., Eferson, C., … Johnson, V. E. (2017). Redefine statistical significance. Nature Human Behaviour, 2, 6–10. http://doi.org/10.1038/s41562-017-0189-z
Bishop, D. V. M. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1–19. http://doi.org/10.1177/1747021819886519
Blanco, F., Perales, J. C., & Vadillo, A. (2017). Can Psychology rescue itself? Incentives, biases, and reproducibility. Anuari de Psicologia de la Societat Valenciana de Psicología, 18, 231–252.
Bouter, L. M. (2015). Commentary: Perverse incentives or rotten apples? Accountability in Research, 22(3), 148–161. http://doi.org/10.1080/08989621.2014.950253
Bruton, S. V., Brown, M., & Sacco, D. F. (2020). Ethical consistency and experience: An attempt to influence researcher attitudes toward questionable research practices through reading prompts. Journal of Empirical Research on Human Research Ethics, 15(3), 216–226. https://doi.org/10.1177/1556264619894435
Chambers, C. (2019). What's next for registered reports? [Comment]. Nature, 573, 187–189. http://doi.org/10.1038/d41586-019-02674-6
Ding, D., Nguyen, B., Gebel, K., Bauman, A., & Bero, L. (2020). Duplicate and salami publication: A prevalence study of journal policies. International Journal of Epidemiology, 49(1), 281–288. http://doi.org/10.1093/ije/dyz187
Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51–61. http://doi.org/10.1089/ees.2016.0223
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4, Article e5738. http://doi.org/10.1371/journal.pone.0005738
Fanelli, D. (2010). Do pressures to publish increase scientists' bias? An empirical support from US States Data. PLOS ONE, 5(4), Article e10271. http://doi.org/10.1371/journal.pone.0010271
Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90, 891–904. http://doi.org/10.1007/s11192-011-0494-7
Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences of the United States of America (PNAS), 115(11), 2628–2631. http://doi.org/10.1073/pnas.1708272114
Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS ONE, 13, Article e0200303. https://doi.org/10.1371/journal.pone.0200303
Frias-Navarro, D., Pascual-Llobell, J., Pascual-Soler, M., Perezgonzalez, J., & Berrios-Riquelme, J. (2020). Replication crisis or an opportunity to improve scientific production? European Journal of Education, 55, 618–631. https://doi.org/10.1111/ejed.12417
Giner-Sorolla, R. (2018). From crisis of evidence to a "crisis" of relevance? Incentive-based answers for social psychology's perennial relevance worries. European Review of Social Psychology, 30(1), 1–38. http://doi.org/10.1080/10463283.2018.1542902
Gliner, J. A., Leech, N. L., & Morgan, G. A. (2002). Problems with null hypothesis significance testing (NHST): What do the textbooks say? The Journal of Experimental Education, 71(1), 83–92. https://doi.org/10.1080/00220970209602058
Gliner, J. A., Vaske, J. J., & Morgan, G. A. (2001). Null hypothesis significance testing: Effect size matters. Human Dimensions of Wildlife, 6(4), 291–301. http://doi.org/10.1080/108712001753473966
Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43(1), 5–18. https://doi.org/10.1177/0149206316679487
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), Article e124. https://doi.org/10.1371/journal.pmed.0020124
Ioannidis, J. P. A. (2019). What have we (not) learnt from millions of scientific papers with p values? The American Statistician, 73(Suppl. 1), 20–25. http://doi.org/10.1080/00031305.2018.1447512
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, 23(5), 524–532. http://doi.org/10.1177/0956797611430953
Kennedy, M. S., & Barnsteiner, J. (2014). Honorary and ghost authorship in nursing publications. Journal of Nursing Scholarship, 46(6), 416–422. http://doi.org/10.1111/jnu.12093
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. http://doi.org/10.1207/s15327957pspr0203_4
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56(5), 746–759. https://doi.org/10.1177/0013164496056005002
Kline, R. B. (2013). Beyond significance testing: Statistics reform in the behavioral sciences. American Psychological Association.
Krueger, J. I., & Heck, P. R. (2019). Putting the p-value in its place. The American Statistician, 73(Suppl. 1), 122–128. http://doi.org/10.1080/00031305.2018.1470033
Llobell, J. P., Perez, J. F. G., & Navarro, M. D. F. (2000). Statistical significance and replicability of the data. Psicothema, 12(Suppl. 2), 408–412.
Mahoney, M. J. (1976). Scientist as subject: The psychological imperative. Ballinger.
Matthes, J., Marquart, F., Naderer, B., Arendt, F., Schmuck, D., & Adam, K. (2015). Questionable research practices in experimental communication research: A systematic analysis from 1980 to 2013. Communication Methods and Measures, 9(4), 193–207. http://doi.org/10.1080/19312458.2015.1096334
Melton, A. W. (1962). Editorial. Journal of Experimental Psychology, 64(6), 553–557. https://doi.org/10.1037/h0045549
Monterde i Bort, H., Llobell, J. P., & Navarro, M. D. F. (2006). Interpretation mistakes in statistical methods: Their importance and some recommendations. Psicothema, 18(4), 848–856.
Neuliep, J. W., & Crandall, R. (1990). Editorial bias against replication research. Journal of Social Behavior & Personality, 5(4), 85–90.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. http://doi.org/10.1177/1745691612459058
Rubin, M. (2017). When does HARKing hurt? Identifying when different types of undisclosed post hoc hypothesizing harm scientific progress. Review of General Psychology, 21(4), 308–320. http://doi.org/10.1037/gpr0000128
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. http://doi.org/10.1177/0956797611417632
Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa. The American Statistician, 49(1), 108–112. http://doi.org/10.1080/00031305.1995.10476125
Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in Science. Perspectives on Psychological Science, 7(6), 670–688. http://doi.org/10.1177/1745691612460687
Wells, F., & Farthing, M. (2019). Fraud and misconduct in biomedical research (4th ed.). Taylor & Francis Group. https://doi.org/10.1201/9780429073328