
A comparative cross-sectional assessment of statistical knowledge of faculty across five health science disciplines

Published online by Cambridge University Press:  14 July 2021

Matthew J. Hayat*
Affiliation:
School of Public Health & Byrdine F. Lewis College of Nursing and Health Professions, Georgia State University, Atlanta, GA, USA
Todd A. Schwartz
Affiliation:
Department of Biostatistics, Gillings School of Global Public Health & School of Nursing, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
MyoungJin Kim
Affiliation:
Mennonite College of Nursing, Illinois State University, Normal, IL, USA
Syeda Zahra Ali
Affiliation:
School of Public Health & Byrdine F. Lewis College of Nursing and Health Professions, Georgia State University, Atlanta, GA, USA
Michael R. Jiroutek
Affiliation:
Department of Clinical Research, Department of Pharmacy Practice (adjunct appointment), College of Pharmacy & Health Sciences, Campbell University, Buies Creek, NC, USA
*
Address for correspondence: M.J. Hayat, PhD, School of Public Health, Georgia State University, PO Box 3984, Atlanta, GA 30302-3894, USA. Email: mhayat@gsu.edu

Abstract

Introduction:

The purpose of this study was to compare statistical knowledge of health science faculty across accredited schools of dentistry, medicine, nursing, pharmacy, and public health.

Methods:

A probability sample of schools was selected, and all faculty at each selected school were invited to participate in an online statistical knowledge assessment covering fundamental topics including randomization, study design, statistical power, confidence intervals, multiple testing, standard errors, regression outcomes, and odds ratios.

Results:

A total of 708 faculty from 102 schools participated, for an overall response rate of 6.5%. Most (94.2%) faculty reported reading the peer-reviewed health-related literature. Across all questions and disciplines, respondents answered 66.2% of questions correctly. Public health had the highest performance (80.7%) and dentistry the lowest (53.3%).

Conclusions:

Knowledge of statistics is essential for critically evaluating evidence and understanding the health literature. These results identify a gap in the knowledge of the educators tasked with training the next generation of health science professionals. Recommendations for addressing this gap are provided.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

Healthcare researchers and practitioners rely on the peer-reviewed literature to understand the state of the science and practice in a health affairs discipline. More than 95% of publications in the health literature report descriptive statistics in table or graphical form, and more than 76% of health-related publications use classical inference, including statistical tests and models [1]. To understand and implement published findings, healthcare researchers and practitioners must have the knowledge, skill, and ability to understand fundamental statistical concepts and methods, as well as to critically evaluate the legitimacy and appropriateness of a study's design, analyses, interpretations, and conclusions.

Clinicians and scientists in a health science field typically learn to read and appraise the peer-reviewed literature during their graduate programs, and health science faculty are tasked with teaching them how to do so. Graduate degree program curricula are usually tightly packed with didactic, practical, and clinical education and training to meet competing demands for developing substantive, scientific, and clinical competency. These packed curricula must also keep pace with the trend toward more integrated, multidisciplinary content that better prepares competent professionals for evidence-based practice and a professional orientation [2-4].

Multiple studies across different health science disciplines have assessed clinicians' statistical knowledge. We reviewed the scientific literature for studies assessing statistical knowledge in the primary health science fields of dentistry, medicine, nursing, pharmacy, and public health. For example, three studies that assessed medical residents' and physicians' knowledge and ability to understand statistical statements in the evidence-based literature showed low levels of understanding [5-7]. Knowing which statistical concepts and methods are used in the literature is paramount for designing graduate programs that cover not only key subject-area material but also the topics most relevant to producing professionals who can read and understand the literature. Of note, the use of complex statistical methods in the New England Journal of Medicine increased substantially from 1989 to 2005 [8].

Additional studies in medicine have quantified incorrect application and usage of statistical methods [9-13]. No known assessments of statistical knowledge have been conducted in public health. Studies in the dental research literature have documented erroneous application of statistical methods [14,15], and several studies have assessed statistical reporting errors in the nursing research literature [16-18]. Finally, statistical knowledge assessments of pharmacy residents and pharmacists have suggested a lack of the knowledge and understanding needed to read and comprehend the pharmacy literature [19-21].

The purpose of this study was to assess knowledge of fundamental statistical concepts among health science faculty. We focused on five primary health science disciplines and reviewed the types of statistical methods used in dentistry [22], medicine [23-28], nursing [18,29,30], pharmacy [31,32], and public health [1,33-35]. Informed by the methods reported in those fields, we built our instrument on previously published work [5,7] and focused it on key fundamental topics reported in the peer-reviewed journals of these respective fields. The results provide needed information for assessing whether health science faculty are themselves adequately prepared to read and understand the literature. Since faculty, in turn, train health science graduate students to read and interpret the literature, these findings should prove useful for identifying gaps in knowledge and training across the health science spectrum. More importantly, these data may help inform data-based decisions about graduate health science education as well as continuing education and professional training for faculty members.

Results for the entire study sample, representing responses from faculty in all five disciplines, and a comparative analysis among disciplines are presented here. Additional discipline-specific publications appear elsewhere for dentistry, nursing, and pharmacy [36-38].

Methods

Survey Development

We developed an instrument consisting of demographic items and a statistical knowledge assessment tool with eight questions about fundamental statistical methods and concepts typically covered in a core graduate-level biostatistics course (Supplement A). The eight statistical concepts were randomization, observational studies, statistical power, confidence intervals, multiple testing, standard error, regression outcomes, and odds ratios. Each multiple-choice knowledge item had four choices: one correct answer, two incorrect answers, and an opt-out option to help prevent guessing. Two additional questions pertained to respondents' attitude toward and perceived importance of statistical concepts. The instrument was administered anonymously using Qualtrics software (Qualtrics, Provo, UT) [39].

The assessment questions were developed by the authors through a careful and involved process, beginning with previously published work [5,7]. The research team was composed of four senior biostatistics faculty members with extensive experience teaching biostatistics. Items were selected through a multidimensional process. The largest focus was on statistical concepts and methods frequently reported in the health science literature. We also considered common biostatistics concepts and methods that, in our years of teaching statistics, students typically have not understood well. While these topics have been thoroughly vetted through the years with our students, this is the first time we have presented these questions to faculty.

As content experts, our team of four biostatistics educators evaluated the construct validity of the items and judged the questionnaire to measure what we aimed to capture, namely, knowledge of fundamental statistical concepts. The content validity of the questionnaire is based on the studies we have cited from the literature of the five health science disciplines.

In addition, our development process included consultation and input from two experienced health science faculty. Several changes were made to the questions based on their feedback, including reducing the number of responses for each knowledge question to three content-based responses. Despite our limited resources and lack of funding, we believe this research and development produced items that are straightforward and that comprehensively, validly, and reliably assess understanding of fundamental statistical concepts.

Target Population and Study Sample

The population for this study consisted of faculty members in accredited health science schools. Table 1 displays the accrediting body and number of schools for each discipline. A total of 537 schools were targeted, with medicine representing the largest pool (147 accredited medical schools) and dentistry the smallest (66 accredited dental schools).

Table 1. Target population

* Based on accredited doctoral programs listed on each organization's website in January 2017.

The study sample was obtained through probability sampling. Accredited institutions were selected using stratified random sampling, with stratification by discipline. Schools within each discipline were randomly selected, and all faculty listed on each selected school's website received an email invitation to participate. Schools were added on a rolling basis, and all faculty in each school were sent two invitations. We evaluated the total target sample size every two weeks; if the target had not been achieved, additional schools were randomly selected and added, and all faculty in those schools were invited. This process continued until the targeted number of completed surveys within each discipline was reached. An a priori sample size calculation was based on a desired precision for the probability of guessing a correct answer. That probability was based on a random guess among the three content-based answer options for each question; we did not count the opt-out option in the denominator, giving a one-third chance of a correct guess. Applying a confidence interval half-width of 0.10, the precision calculation yielded a minimum target sample size of 103 faculty per discipline, or an overall minimum study sample size of 515 faculty.
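
The paper does not state the exact formula behind this calculation. As a minimal sketch, assuming the textbook Wald-style precision formula for a proportion, the computation looks like the following; note that plugging in p = 1/3 gives a smaller n than the reported 103, so the authors' calculation was presumably more conservative than this bare formula.

```python
from math import ceil

from scipy.stats import norm


def precision_sample_size(p: float, half_width: float, confidence: float = 0.95) -> int:
    """Smallest n for which a Wald confidence interval for a proportion p
    has at most the requested half-width: n >= z^2 * p * (1 - p) / d^2."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p * (1 - p) / half_width**2)


# Guessing probability 1/3 (three content-based options) and half-width 0.10,
# as described above. This formula alone gives 86; the paper's (unstated,
# evidently more conservative) calculation gave 103 per discipline.
print(precision_sample_size(p=1 / 3, half_width=0.10))
```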

Survey Administration

A Qualtrics survey form was developed [39]. Faculty were invited to participate in an anonymous cross-sectional survey between April and August 2017. All faculty listed on the school website in each randomly selected school were sent email invitations to participate in the study. This study was reviewed and classified as exempt by the Georgia State University and the University of North Carolina at Chapel Hill Institutional Review Boards.

Analysis

Data analyses were performed using SAS software (SAS Institute, Cary, NC). Descriptive statistics were used to summarize all study measures: means and standard deviations for continuous variables, and frequency distributions with counts and percentages for categorical variables. The main outcome was statistics knowledge, scored as the number (or percentage) of correct responses out of the eight questions on the knowledge assessment tool, with each question equally weighted and missing responses counted as incorrect. Scores were summarized with percentages for both individual items and aggregated totals, and boxplots of the number of correct responses were constructed by discipline.
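
As a concrete illustration of this scoring rule, here is a minimal sketch in Python/pandas; the item names, answer key, and responses below are hypothetical stand-ins, not the study's data (the actual instrument is in Supplement A).

```python
import numpy as np
import pandas as pd

# Hypothetical answer key for the eight items.
key = pd.Series({f"q{i}": k for i, k in enumerate("ABCABCAB", start=1)})

# Hypothetical responses: one row per respondent; NaN marks a skipped item.
responses = pd.DataFrame(
    [["A", "B", "C", "A", "B", "C", "A", "B"],
     ["A", np.nan, "C", "B", "B", "C", np.nan, "A"]],
    columns=key.index, index=["resp_1", "resp_2"],
)

correct = responses.eq(key)            # NaN compares unequal, so missing = incorrect
scores = correct.sum(axis=1)           # total correct out of 8, items equally weighted
item_pct = correct.mean(axis=0) * 100  # percent correct per item, aggregated
print(scores, item_pct, sep="\n")
```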

In addition, a general linear model was used to model the number of correct responses as a function of discipline and faculty characteristics. Model selection followed a multitiered approach, evaluating the magnitude of each regression coefficient and each covariate's contribution to the explained variability (R2) of the outcome. Because we anticipated that faculty characteristics would differ across disciplines, we tested for moderating effects of discipline on the relationships between specific faculty characteristics and statistics knowledge. Correlation and collinearity statistics, such as variance inflation factors and tolerance, were used to test for multicollinearity among predictors.
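
A minimal sketch of this modeling workflow, using Python's statsmodels in place of SAS; all variable names and the synthetic data are hypothetical stand-ins for the study's measures, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in data; columns are assumptions, not the study's variables.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "discipline": rng.choice(
        ["dentistry", "medicine", "nursing", "pharmacy", "public health"], n),
    "teaches_stats": rng.integers(0, 2, n),
    "years_faculty": rng.uniform(0, 40, n),
})
df["n_correct"] = np.clip(4 + df["teaches_stats"] + rng.normal(0, 1.5, n), 0, 8).round()

# Main-effects general linear model for the number of correct responses.
main = smf.ols(
    "n_correct ~ C(discipline) + teaches_stats + years_faculty", data=df).fit()
print(main.summary())  # coefficients, 95% CIs, and R-squared

# Moderation check: do discipline-by-covariate interaction terms improve fit?
moderated = smf.ols(
    "n_correct ~ C(discipline) * teaches_stats + years_faculty", data=df).fit()
print(moderated.compare_f_test(main))  # (F statistic, p-value, df difference)

# Variance inflation factors for the main-effects design matrix (skip intercept).
X = main.model.exog
vif = {name: variance_inflation_factor(X, i)
       for i, name in enumerate(main.model.exog_names) if name != "Intercept"}
print(vif)
```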

Results

Table 2 displays the sampling characteristics and response rates for the study sample. A total of 10,931 faculty in 102 schools were invited to participate. Schools varied considerably in the number of faculty: medicine had the largest average school size (210.2 faculty per school) and pharmacy the smallest (67.1 faculty per school). We adjusted our estimate of the number of faculty invited by subtracting returned emails and delivery failure notifications from the number of invitations sent. Response rates varied across disciplines, with dentistry and medicine the lowest (4.2% and 5.5%, respectively) and public health the highest (8.6%). The overall response rate was 6.5% (708 respondents out of an adjusted 10,931 faculty invited).

Table 2. Summary of sampling and response across disciplines

* Adjusted for returned emails and delivery failure notifications.

Faculty Characteristics

A summary of faculty characteristics is provided in Table 3. Females accounted for the majority of respondents in nursing (90.2%) and pharmacy (64.0%), whereas males were the majority in dentistry (63.3%) and medicine (61.5%); public health was relatively balanced (48.0% female). Prior biostatistics education was assessed with a question about the number of statistics/biostatistics courses completed. The majority of respondents in public health (75.4%) and nursing (70.7%) reported completing three or more courses. In every discipline except public health (49.2%), the majority of faculty reported having completed no more than one epidemiology course. Almost half of pharmacy faculty (46.8%) and medicine faculty (48.7%) reported having completed no more than one biostatistics course. Thirty percent of public health respondents reported teaching statistics/biostatistics, roughly twice the proportion in the other disciplines.

Table 3. Summary of faculty characteristics (n = 708)

Importance and Attitude

In addition to the objective statistics knowledge assessment, we sought to understand how faculty rated the importance of statistics in the researcher role. Overall, 95.8% of faculty in the sample rated statistics as somewhat or very important, and most (94.2%) reported reading peer-reviewed health-related scientific journal articles. We also included a question about attitude toward understanding of fundamental statistical concepts. About half of dentistry faculty (48.6%) reported understanding all of the expressions of statistical concepts listed in the attitude question, compared with 87.7% of public health faculty. It is also noteworthy that 9.2% of dentistry faculty reported understanding little or none of those expressions, a higher proportion than in any other discipline.

Statistics Knowledge Assessment

Figure 1 displays the distribution of correct responses for each discipline. The boxplots show that the distribution for public health was shifted higher than the other disciplines and dentistry lower, and that the interquartile range for dentistry was substantially larger than for the other disciplines. The distributions for medicine, nursing, and pharmacy were similar to one another.

Figure 1. Boxplots displaying distributions for number of correct responses out of 8 questions by discipline.*

*The upper and lower ends of the box are the upper and lower quartiles. The median is marked by a horizontal line inside the box. The mean is symbolized with a diamond. Circles indicate outliers.

Summary statistics for statistics knowledge assessment scores by discipline are shown in Table 4. The overall percentage of correct answers across all questions and disciplines was 66.2%. Among the disciplines, public health performed highest (80.7%) and dentistry lowest (53.3%); medicine (62.7%), nursing (63.6%), and pharmacy (63.8%) were comparable to one another. Aggregating across disciplines, scores were highest (above 80%) for the items on observational studies, multiple testing, and standard errors. The lowest aggregated scores, hovering around 50%, were for the items on randomization, confidence intervals, and odds ratios. While the overall average score for the question about the difference between linear and logistic regression was 67.8%, discipline-specific responses to this item varied considerably: public health had an 89.9% correct response rate, whereas dentistry (50.5%), medicine (57.3%), and pharmacy (59.0%) scored much lower. Interpreting an odds ratio appeared difficult for the majority of respondents, with only public health (72.1%) scoring well above the 50% mark.

Table 4. Statistics knowledge assessment scores by discipline (n = 708)

The final model, including parameter estimates and 95% confidence intervals (CI), is displayed in Table 5. Discipline and faculty characteristics explained 40.8% of the variability in the number of correct responses (R2 = 0.408). After controlling for faculty characteristics, dentistry faculty averaged 1.07 fewer correct responses out of the eight statistical knowledge questions than public health faculty (95% CI: [−1.45, −0.68]). As expected, controlling for discipline and other faculty characteristics, teachers of statistics/biostatistics performed considerably better on the assessment, answering on average 0.91 more items correctly (95% CI: [0.60, 1.21]). Controlling for the variables in the model, faculty who reported reading the peer-reviewed health literature correctly answered, on average, 1.35 more items than those who did not (95% CI: [0.86, 1.83]). Having a research doctorate was also associated with statistics knowledge, with such faculty answering on average 0.28 more items correctly (95% CI: [0.02, 0.54]). However, years of professional experience had an inverse relationship with statistics knowledge: controlling for discipline and other faculty characteristics, each 10 additional years of professional experience was associated with 0.30 fewer correct responses on average (95% CI: [−0.44, −0.16]). Conversely, years as a faculty member had a positive association: controlling for other variables in the model, each 10 additional years as a faculty member was associated with 0.27 more correct responses on average (95% CI: [0.12, 0.42]). We tested for, and failed to find, a moderating effect of discipline on the relationship between each faculty characteristic and the number of correct responses.

Table 5. General linear model results for modeling the expected number of correct responses (out of 8 questions) as a function of discipline and faculty characteristics*

* Discipline and faculty characteristics explained 40.8% of the variability of number of correct responses (R2 = 0.408).

Discussion

To our knowledge, this is the first study to collect data prospectively to assess health science faculty's knowledge of fundamental statistical concepts and to enable comparisons of statistics knowledge across health science disciplines. As expected, the number of biostatistics courses completed was strongly and positively associated with statistics knowledge, and teachers of statistics/biostatistics performed better on the knowledge assessment.

The response rate was disappointing. Although we had to sample more schools than originally intended to obtain the planned number of faculty, we did achieve our target sample size and believe these results accurately reflect knowledge levels across content areas and disciplines. However, there are likely some biases in our study sample. We suspect that respondents performed better than non-respondents would have, as respondents were probably more likely to have an interest in statistics and to use it regularly in their daily work. Hence, these results may overestimate levels of statistics knowledge among health science faculty. That 95.8% of the study sample considered statistics somewhat or very important may reflect respondents' motivation to participate. Although faculty were clustered within institutions, institution identity data were not collected, to protect anonymity, so we were unable to quantify or account for any potential clustering effect. For the same reason, there was no way to verify the appropriateness of respondents' academic appointments. The negative association between years of experience as a working professional and knowledge contrasted with the positive association for years of experience as a faculty member. One possible explanation for this conflicting result is that some faculty spent part of their professional years outside academia, for example in a clinical setting, with less involvement and daily contact with research and statistical methods.

It is also noteworthy that we have likely underestimated the response rates, since our calculation assumes that every email invitation was received and opened. We had no way of knowing whether each faculty member received and opened our invitation; some messages may have been filtered into spam folders, and others may have gone unnoticed. The denominator in our response rate calculations therefore reflects an ideal scenario in which all emails were received, opened, and considered. This is very unlikely to have been the case, so the true response rate was likely higher.

We opted for a brief, eight-question instrument after weighing completeness of response against holding participants' attention through completion. Each statistical concept was assessed with a single question, which limits our insight into the depth and scope of faculty's understanding of these concepts. Our decision to limit the questionnaire to eight questions rather than use a lengthier, more detailed instrument was made after weighing the potential benefit of more information against the risk of lower response [40-42]. In addition, we wanted the questions collectively to give broad coverage of fundamental statistical topics relevant across the five disciplines.

We believe that the total number of correct responses out of eight questions addresses our study aim of assessing health science faculty's knowledge of fundamental statistical concepts, and for this reason we fit a general linear model within this framework. The model explains 40.8% of the variability in the number of correct responses, which is meaningful and informs our understanding of performance on the knowledge assessment.

There are likely a variety of factors contributing to discipline differences in statistical knowledge and competency. We expected public health faculty to perform better than the other disciplines, since the Association of Schools and Programs of Public Health (ASPPH) provides schools and programs with a recommended set of biostatistics competency guidelines [43]. Doctoral public health students are often required to complete at least a two-semester core biostatistics sequence, and those concentrating in epidemiology, biostatistics, or environmental science will undoubtedly complete additional biostatistics coursework. Conversely, there are no known biostatistics-specific competency guidelines in dentistry, nursing, pharmacy, or medicine [44,45].

Dentistry faculty had the weakest performance on the statistics knowledge assessment, which may reflect the absence of biostatistics competencies required for the degree [46]. Studies of dentists have found that, despite self-perceived notions of strong statistical knowledge, objective quantitative assessments of statistical understanding have consistently shown low levels of knowledge [7,47,48]. In fact, a study by Kim et al. found that more than half of published reports in dentistry contain statistical errors [22].

The American Association of Colleges of Nursing (AACN) establishes education standards for graduate degree programs in academic nursing [49]. Unfortunately, in the recently released Essentials document for nursing curricula, the word 'statistics' does not appear anywhere in the 88-page document, and statistics education or training is absent from the extensive list of AACN curriculum standards. To date, no known biostatistics competency guidelines are in place for graduate nursing students [49].

The Accreditation Council for Pharmacy Education, the accrediting body for pharmacy programs in the US, published its most recent standards and key elements in 2015 [50]. Biostatistics is included among the required elements of the didactic doctor of pharmacy curriculum. However, the requirements are vague and terse: "Appropriate use of commonly employed statistical tests, management of data sets, and the evaluation of the validity of conclusions generated based on the application of those tests to the data sets." No other mention of biostatistics appears in the standards document. However, with an increasing emphasis on research competence in doctor of pharmacy programs, an effort discussed by Jiroutek et al. [37], most doctor of pharmacy programs now include at least one biostatistics course [38].

Our aim was to explore whether faculty had sufficient general statistics knowledge to read and understand the health literature. Interestingly, the vast majority of faculty surveyed reported reading the peer-reviewed health-related scientific literature, and knowledge of fundamental statistical concepts logically follows as a requirement for comprehending what is read. Faculty respondents overwhelmingly indicated that they considered statistics important in their role as researchers. Yet, despite the self-stated importance of statistics, overall performance was poor, with an alarming average score of 66.2%.

It was interesting to find that years of professional experience was negatively associated with statistics knowledge. One possible explanation is that this metric serves as a proxy for the number of years since the participant was trained; if training occurred long ago, some or even much of it may simply have been forgotten. Further, since statistical training may have increased over time in graduate health science programs, the relationship between years of professional experience and current statistics knowledge may be muddled.

Health science faculty across all disciplines performed poorly, with public health having the highest average score and dentistry the lowest. While we cannot claim causality, we feel strongly that the specificity regarding biostatistics in the public health accreditation standards is a key factor in public health faculty's outperformance. Conversely, the lack of detail, focus, and emphasis on biostatistics education and training in accreditation standards may explain the underperformance of the other disciplines relative to public health.

Another consideration that very likely affects the statistics education process is instructor qualification. The authors have observed that it is not uncommon for discipline-specific faculty who have themselves completed limited statistics coursework to be tapped as the instructor for an introductory statistics course; the converse would be highly unusual, as someone without formal training in dentistry, for example, would rarely be asked to teach dentistry. Inadequate instructor qualifications may thus be a downstream contributing factor to the lack of understanding observed among health science faculty in this study.

We expect little has changed across disciplines since these data were collected in 2017. However, several recently published papers geared specifically toward medical research learners have the potential to effect positive change [51-53]. In the first of these papers, the authors developed a comprehensive set of statistical competencies for medical research learners [51], based on responses to a prospective survey of doctoral-level statistics educators, most of whom were faculty at research institutions. Although focused on medicine, those authors provided a framework for considering statistical knowledge across the health sciences. In a follow-up study, that team assessed gaps in knowledge and training across the health spectrum [52]. Findings for their competencies labeled 'Assess bias in publications' and 'Need for statistical consultation' correspond well to our study findings, suggesting that these topics are not extensively covered in required coursework but need to be.

Conclusion

This work has revealed specific fundamental statistical concepts for which health science faculty exhibit a substantial and common lack of understanding. Curricula would benefit from well-defined biostatistics competency guidelines and core graduate coursework in biostatistics. Health science faculty would undoubtedly benefit from professional opportunities to enhance their knowledge of and skill with biostatistics concepts and methods. Training workshops and continuing education should be provided regularly at the institutional level to better prepare faculty to teach and train students in evidence-based health care best practices.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/cts.2021.820.

Disclosures

The authors have no conflicts of interest to declare.

Ethical Approval

Approval was obtained from the Georgia State University and University of North Carolina at Chapel Hill Institutional Review Boards.

References

1. Hayat MJ, Powell A, Johnson T, Cadwell BL. Statistical methods used in the public health literature and implications for training of public health professionals. PLoS One 2017; 12(6): e0179032.
2. Schmidt H. Integrating the teaching of basic sciences, clinical sciences, and biopsychosocial issues. Academic Medicine 1998; 73(Suppl 9): S24–S31.
3. Abate M, Stamatakis MK, Haggett RR. Excellence in curriculum development and assessment. American Journal of Pharmaceutical Education 2003; 67(1): 478–500.
4. Buja LM. Medical education today: all that glitters is not gold. BMC Medical Education 2019; 19: 110.
5. Wulff HR, Andersen B, Brandenhoff P, Guttler F. What do doctors know about statistics? Statistics in Medicine 1987; 6(1): 3–10.
6. Polychronopoulou A, Eliades T, Taoufik K, Papadopoulos MA, Athanasiou AE. Knowledge of European orthodontic postgraduate students on biostatistics. European Journal of Orthodontics 2011; 33(4): 434–440.
7. Scheutz F, Andersen B, Wulff HR. What do dentists know about statistics? Scandinavian Journal of Dental Research 1988; 96(4): 281–287.
8. Horton NJ, Switzer SS. Statistical methods in the journal. The New England Journal of Medicine 2005; 353(18): 1977–1979.
9. Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials. A survey of three medical journals. The New England Journal of Medicine 1987; 317(7): 426–432.
10. Gardner MJ, Bond J. An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA 1990; 263(10): 1355–1357.
11. McGuigan SM. The use of statistics in the British Journal of Psychiatry. British Journal of Psychiatry 1995; 167(5): 683–688.
12. Welch GE 2nd, Gabbe SG. Review of statistics usage in the American Journal of Obstetrics and Gynecology. American Journal of Obstetrics and Gynecology 1996; 175(5): 1138–1141.
13. Campbell MK, Grimshaw JM. Cluster randomised trials: time for improvement. The implications of adopting a cluster design are still largely being ignored. BMJ 1998; 317(7167): 1171–1172.
14. Tu YK, Kellett M, Clerehugh V, Gilthorpe MS. Problems of correlations between explanatory variables in multiple regression analyses in the dental literature. British Dental Journal 2005; 199(7): 457–461.
15. Fleming PS, Koletsi D, Polychronopoulou A, Eliades T, Pandis N. Are clustering effects accounted for in statistical analysis in leading dental specialty journals? Journal of Dentistry 2013; 41(3): 265–270.
16. Anthony D. A review of statistical methods in the Journal of Advanced Nursing. Journal of Advanced Nursing 1996; 24(5): 1089–1094.
17. Conway A, Rolley JX, Fulbrook P, Page K, Thompson DR. Improving statistical analysis of matched case-control studies. Research in Nursing & Health 2013; 36(3): 320–324.
18. Gaskin CJ, Happell B. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research. International Journal of Nursing Studies 2014; 51(5): 795–806.
19. Murphy JE, Slack MK, Boesen KP, Kirking DM. Research-related coursework and research experiences in doctor of pharmacy programs. American Journal of Pharmaceutical Education 2007; 71(6): 113.
20. Sheehan AH, Wagner LE, Sowinski KM. Student pharmacists' knowledge of biostatistical and literature evaluation concepts. Currents in Pharmacy Teaching and Learning 2016; 8(5): 622–628.
21. Bookstaver PB, Miller AD, Felder TM, Tice DL, Norris LB, Sutton SS. Assessing pharmacy residents' knowledge of biostatistics and research study design. Annals of Pharmacotherapy 2012; 46(7–8): 991–999.
22. Kim JS, Kim DK, Hong SJ. Assessment of errors and misused statistics in dental research. International Dental Journal 2011; 61(3): 163–167.
23. Colditz GA, Emerson JD. The statistical content of published medical research: some implications for biomedical education. Medical Education 1985; 19(3): 248–255.
24. Reed JF 3rd, Salen P, Bagher P. Methodological and statistical techniques: what do residents really need to know about statistics? Journal of Medical Systems 2003; 27(3): 233–238.
25. Rigby AS, Armstrong GK, Campbell MJ, Summerton N. A survey of statistics in three UK general practice journals. BMC Medical Research Methodology 2004; 4: 28.
26. Kurichi JE, Sonnad SS. Statistical methods in the surgical literature. Journal of the American College of Surgeons 2006; 202(3): 476–484.
27. Zaman Q, Azam M, Pfeiffer KP, Strasak AM. Statistical methods and complexity of data analysis in recent surgical research. Elixir International Journal, Human Physiology 2011; 35: 2961–2963.
28. Strasak AM, Zaman Q, Marinell G, Pfeiffer KP, Ulmer H. The use of statistics in medical research. The American Statistician 2007; 61(1): 47–55.
29. Zellner K, Boerst CJ, Tabb W. Statistics used in current nursing research. Journal of Nursing Education 2007; 46(2): 55–59.
30. Malone HE, Coyne I. A review of commonly applied statistics in JAN. Journal of Advanced Nursing 2017; 73(8): 1771–1773.
31. Hokanson JA, Berina LF, Guernsey BG, Doutre' WH, Bryant SG. Do pharmacy schools need to increase the statistical education of their students? American Journal of Pharmaceutical Education 1985; 49: 46–49.
32. Lee CM, Soin HK, Einarson TR. Statistics in the pharmacy literature. Annals of Pharmacotherapy 2004; 38(9): 1412–1418.
33. Pocock SJ, Collier TJ, Dandreo KJ, et al. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ 2004; 329(7471): 883.
34. Merrill RM, Stoddard J, Shields EC. The use of statistics in the American Journal of Health Education from 1994 through 2003: a content analysis. American Journal of Health Education 2004; 35(5): 290–297.
35. Arcila Quiceno V, García Restrepo E, Gómez Rúa N, Montenegro Martínez G, Silva Ayçaguer LC. Statistical tools in published articles of a public health journal in 2013 and 2014: bibliometric cross-sectional study. Medwave 2015; 15(7): e6238.
36. Hayat MJ, Kim M, Schwartz TA, Jiroutek MR. Assessment of dental faculty members' understanding of statistical concepts. Health Professions Education 2020; 6(1): 92–98.
37. Jiroutek MR, Kim M, Hayat MJ, Taylor MN, Schwartz TA. A cross-sectional assessment of statistical knowledge among pharmacy faculty. Currents in Pharmacy Teaching & Learning 2019; 11(8): 793–801.
38. Hayat MJ, Kim M, Schwartz TA, Jiroutek MR. A study of statistics knowledge amongst nurse faculty in schools with doctoral programs. Nursing Outlook 2020; 69(2): 228–233.
39. Qualtrics [software]. Qualtrics, Provo, UT, 2017. (https://www.qualtrics.com)
40. Stanton JM, Sinar EF, Balzer WK, Smith PC. Issues and strategies for reducing the length of self-report scales. Personnel Psychology 2002; 55(1): 167–193.
41. Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opinion Quarterly 2009; 73: 349–360.
42. Maloney P, Grawitch MJ, Barber LK. Strategic item selection to reduce survey length: reduction in validity? Consulting Psychology Journal: Practice and Research 2011; 63(3): 162–175.
43. Association of Schools of Public Health. Master's Degree in Public Health Core Competency Development Project Report, version 2.3. Washington, DC: Association of Schools of Public Health, 2006.
44. Hayat MJ. Guidelines for Assessment and Instruction in Statistics Education (GAISE): extending GAISE into nursing education. Journal of Nursing Education 2014; 53(4): 192–198.
45. Oster RA, Enders FT. The importance of statistical competencies for medical research learners. Journal of Statistics Education 2018; 26(2): 137–142.
46. Kim JS, Dailey RJ. Biostatistics for Oral Healthcare. Oxford: Wiley-Blackwell Publishing, 2007.
47. Best AM, Laskin DM. Oral and maxillofacial surgery residents have poor understanding of biostatistics. Journal of Oral and Maxillofacial Surgery 2013; 71(1): 227–234.
48. Shetty AC, Al Rasheed AM, Albwardi SA. Dental professionals' attitude towards bio-statistics. Journal of Dental Oral Hygiene 2015; 7(7): 113–118.
49. American Association of Colleges of Nursing. The essentials: core competencies for professional nursing education [Internet], 2020 [cited June 24, 2021]. (https://www.aacnnursing.org/Portals/42/AcademicNursing/pdf/Essentials-2021.pdf)
50. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree ("Standards 2016") [Internet], February 2015 [cited Feb 11, 2021]. (https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf)
51. Enders FT, Lindsell CJ, Welty LJ, et al. Statistical competencies for medical research learners: what is fundamental? Journal of Clinical and Translational Science 2017; 1(3): 146–152.
52. Oster RA, Enders FT. The importance of statistical competencies for medical research learners. Journal of Statistics Education 2018; 26(2): 137–142.
53. Oster RA, Devick KL, Thurston SW, et al. Learning gaps among statistical competencies for clinical and translational science learners. Journal of Clinical and Translational Science 2020; 5(1): e12.
