
Indices of clinical research coordinators’ competence

Published online by Cambridge University Press:  24 July 2019

Carlton A. Hornung*
Affiliation:
Consortium of Academic Programs in Clinical Research Department of Medicine, University of Louisville School of Medicine, Louisville, KY, USA
Phillip A. Ianni
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Carolynn T. Jones
Affiliation:
College of Nursing, The Ohio State University, Columbus, OH, USA
Elias M. Samuels
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Vicki L. Ellingrod
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA College of Pharmacy, University of Michigan, Ann Arbor, MI, USA
*
Address for correspondence: C. A. Hornung, PhD, MPH, Department of Medicine, University of Louisville, 18613 John Connor Rd., Cornelius, NC 28031, USA. Email: CAHornung@Louisville.edu

Abstract

Introduction:

There is a clear need to educate and train the clinical research workforce to conduct scientifically sound clinical research. Meeting this need requires tools to assess an individual’s preparedness to function efficiently in the clinical research enterprise and tools to evaluate the quality and effectiveness of programs designed to educate and train clinical research professionals. Here we report the development and validation of a competency self-assessment entitled the Competency Index for Clinical Research Professionals, version II (CICRP-II).

Methods:

CICRP-II was developed using data collected from clinical research coordinators (CRCs) participating in the “Development, Implementation and Assessment of Novel Training In Domain-Based Competencies” (DIAMOND) project at four clinical and translational science award (CTSA) hubs and partnering institutions.

Results:

An exploratory factor analysis (EFA) identified a two-factor structure: the first factor measures self-reported competence to perform Routine clinical research functions (e.g., good clinical practice regulations (GCPs)), while the second factor measures competence to perform Advanced clinical functions (e.g., global regulatory affairs). We demonstrate between-groups validity by comparing CRCs working in different research settings.

Discussion:

The excellent psychometric properties of CICRP-II, its ability to distinguish between experienced CRCs at research-intensive CTSA hubs and CRCs working in less research-intensive community-based sites, and the simplicity of its alternative scoring methods make it a valuable tool both for gauging an individual’s perceived preparedness to function in the role of CRC and for evaluating the value and effectiveness of clinical research education and training programs.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Association for Clinical and Translational Science 2019

Introduction

The timely and successful translation of pharmaceuticals and medical devices into clinical applications to improve human health requires a well-prepared and competent workforce of clinical research professionals that includes principal investigators, research coordinators, monitors, administrators, regulatory affairs experts, informaticians, data managers, statisticians, and others. Appropriate training and mastery of the competencies characterizing each role in the research process is essential for the efficient conduct of clinical and translational research [1–3]. Accordingly, there is a critical need for tools to assess an individual’s preparedness to execute his or her role in the research process, tools to assess an individual’s need for continuing education and training, and tools to evaluate the quality of education and training programs that prepare individuals to work in the clinical research enterprise.

Several steps have been taken to identify the core competencies that define the clinical research profession. An initial step was undertaken by the Joint Task Force (JTF) on the Harmonization of Competencies for the Clinical Research Profession [4–7]. The JTF was composed of key stakeholders in the clinical research enterprise, including representatives from academic institutions, the pharmaceutical industry, and clinical research professional organizations such as the Association of Clinical Research Professionals (ACRP), the Society of Clinical Research Associates (SoCRA), and the Consortium of Academic Programs in Clinical Research (CoAPCR). The JTF identified eight distinct theoretical domains consisting of 51 core competencies that characterize the clinical research process. The eight competency domains they identified are:

  • Scientific concepts and research design (SC): Knowledge of scientific concepts related to the design and analysis of clinical trials.

  • Ethical and participant safety considerations (EP): Care of patients, aspects of human subject protection and safety in the conduct of a clinical trial.

  • Medicines development and regulation (MD): Knowledge of how drugs, devices, and biologicals are developed and regulated.

  • Clinical trial operations (CT): Study management, GCP compliance, safety management, and handling of investigational product.

  • Study and site management (SM): Site and study operations.

  • Data management and informatics (DM): How data are acquired and managed during a clinical trial.

  • Leadership and professionalism (LP): The principles and practices of leadership and professionalism in clinical research.

  • Communication and teamwork (TW): All elements of communication within the site and between the site and sponsor. Teamwork skills necessary for conducting a clinical trial.
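
These domain abbreviations recur throughout this article (e.g., in the Table 1 footnote, where they tag individual competency items). A minimal sketch of how they might be represented as a lookup table in Python; the variable name is hypothetical:

```python
# Hypothetical mapping of the eight JTF/ECRPTQ competency domain
# abbreviations to their full names, e.g., for labeling survey items.
JTF_DOMAINS = {
    "SC": "Scientific concepts and research design",
    "EP": "Ethical and participant safety considerations",
    "MD": "Medicines development and regulation",
    "CT": "Clinical trial operations",
    "SM": "Study and site management",
    "DM": "Data management and informatics",
    "LP": "Leadership and professionalism",
    "TW": "Communication and teamwork",
}
```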

The JTF’s efforts were expanded by work supported by the National Center for Advancing Translational Sciences (NCATS) at the National Institutes of Health (NIH). In 2015, NCATS supported the implementation of the Enhancing Clinical Research Professionals’ Training and Qualification (ECRPTQ) project [8]. Subsequently, NCATS supported the initiation of the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) project in 2017. The primary aim of the DIAMOND project was the creation of a federated database structured around the ECRPTQ competency framework to curate information about research training opportunities for clinical research professionals working throughout the CTSA consortium. The DIAMOND investigators’ second aim was the development and validation of competency-based tools designed to assess the ability of clinical research professionals to perform their roles in the clinical research enterprise and to evaluate the need for and quality of clinical research education and training programs.

A competency-based assessment inventory for principal investigators and physician scientists, the Clinical Research Appraisal Inventory (CRAI), was developed by Mullikin and colleagues [9]. While the CRAI has undergone several modifications [10–12], it has become the standard for assessing the self-perceived competency of principal investigators (PIs) to conduct clinical trials. However, some of the competencies included in the inventory are not functions that would ordinarily be performed by the other members of the clinical research team, who play essentially supporting roles (e.g., research managers, regulatory affairs specialists, data managers) in the research process. Accordingly, in our view, the CRAI is not optimal for assessing the preparedness or training needs of non-PI team members or for assessing the utility of the educational programs that prepare them to carry out their functions under the research protocol efficiently. Yet a tool comparable to the CRAI was not available to assess the competence or training needs of support personnel.

To address the need for such a tool, DIAMOND investigators collaborated with representatives of CoAPCR (CAH and CTJ) to analyze survey data collected by the JTF and create a tool to assess the preparedness of those playing supportive roles in the clinical research process. In 2014, the JTF surveyed over 2000 clinical research professionals working in all roles and across a wide spectrum of clinical research settings (e.g., medical centers, contract research organizations [CROs], private research settings, community hospitals) around the world to assess their self-perceived competence to perform the functions defined by the 51 core competencies. Details of the methods and results have been published elsewhere [1]. Briefly, exploratory factor analysis (EFA) of data obtained from clinical research professionals employed in the USA or Canada identified 20 core competencies that formed five factors: a “General Index” of self-perceived competence to perform common research functions (e.g., GCPs) and four subscales reflecting specialized research functions: “Ethics and Patient Safety,” “Medicines Development,” “Data Management,” and “Scientific Concepts.” Together, the General Index and the four specialized indices make up the first version of the Competency Index for Clinical Research Professionals (CICRP-I). The five measures were highly correlated and had high face validity with reasonable psychometric properties. Most importantly, scores on the General Index, the Ethics and Patient Safety subscale, and the Medicines Development subscale differed significantly (p < 0.05) among those who reported their role to be research coordinator, administrator, regulatory affairs specialist, or data manager [1].

These findings suggested that there remains a need to create an assessment tool specifically for the role of clinical research coordinator (CRC) comparable to the CRAI that was created as an assessment tool for the role of principal investigator. We reasoned that in the routine performance of their role, CRCs perform a wide range of functions in the clinical research process and therefore they must be prepared to carry out the activities described by core competencies in each domain of clinical research identified by the JTF. Further, and in contrast to CRCs, other professionals in the research process perform what are essentially supporting roles (e.g., data managers, regulatory affairs specialists) and therefore need to have expertise in one or more competency domains related to their specialized area of responsibility. However, they need not have a command of the broad array of core competencies required of the CRC. We also reasoned that the CRCs working at the DIAMOND CTSA hubs are more likely to have greater experience in coordinating the most complicated trials and protocols across all phases of clinical and translational research in contrast to CRCs who work at less research-intensive sites such as community-based settings or private physician’s offices where they often double as care providers. Here we report the development of a tool to assess the self-perceived competence of CRCs and validate that tool by testing the hypothesis that CRCs working at CTSA hubs will demonstrate greater self-perceived competence than CRCs who function in less research-intensive settings.

Methods

Online surveys were administered utilizing existing email groups of clinical research professionals working at the DIAMOND CTSA hubs and their partnering hospitals: University of Michigan, Ohio State University and Nationwide Children’s Hospital, University of Rochester, and Tufts University and Tufts Medical Center. Each of the hubs used a standardized approach to recruitment, with common recruitment language and list-servs across all sites. All four of these universities carry a Carnegie R1 classification as very high research activity institutions and, we believe, are representative of the other research-intensive CTSA settings in the United States. This research was determined to be exempt by the institutional review boards (IRBs) at all sites.

The survey solicited information on demographic characteristics, education, and current and previous work experience in various clinical research roles and settings. The 95 respondents who reported that they were currently working as a CRC and had at least 1 year of CRC experience are the subjects of this analysis. These CRCs indicated how confident they felt to perform the functions defined by the 20 CICRP core competencies and their desire for additional training in each of the eight JTF competency domains. In scoring themselves on the core competencies, the DIAMOND CRC respondents used an 11-point format (i.e., 0 = “Not at all Confident” to 10 = “Completely Confident”), similar to the scoring method used in the CRAI. These data therefore provide a unique opportunity to develop an assessment tool for CRCs based on their self-perceived competence for work at research-intensive CTSA institutions.

An EFA with principal axis extraction and promax rotation (kappa = 4) was performed using SPSS 25. We determined the number of factors using a scree plot showing the eigenvalue associated with each factor (Fig. 1). The scree plot failed to provide a clear indication of the number of factors but suggested a range from two to four. We excluded the 4-factor model because its fourth eigenvalue was less than 1, a conventional criterion for factor retention. We examined the pattern matrices of the 2- and 3-factor models and rejected the 3-factor model because its factors were not coherent: the three factors did not clearly pertain to distinct competency domains or research functions, and a number of items loaded on more than one factor.

Fig. 1. Scree plot of the eigenvalues. Abbreviation: CICRP-II, Competency Index for Clinical Research Professionals, version II.
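
As a hedged illustration of the workflow just described, the following Python sketch uses the open-source factor_analyzer package in place of SPSS; the DataFrame `ratings` (one 0–10 column per competency item) is an assumed input:

```python
# A minimal sketch of the EFA workflow described above, using the
# factor_analyzer package rather than SPSS. `ratings` is assumed to be a
# pandas DataFrame with one 0-10 column per core competency item.
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer

def scree_and_fit(ratings, n_factors=2):
    # Unrotated solution, used only to obtain eigenvalues for the scree plot.
    unrotated = FactorAnalyzer(rotation=None, method="principal")
    unrotated.fit(ratings)
    eigenvalues, _ = unrotated.get_eigenvalues()

    plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, "o-")
    plt.xlabel("Factor number")
    plt.ylabel("Eigenvalue")
    plt.show()

    # Principal axis extraction with an oblique promax rotation, as in SPSS.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
    fa.fit(ratings)
    return fa.loadings_  # pattern matrix used to assign items to factors
```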

In the 2-factor model, 19 of the 20 core competencies exhibited a high loading on either the first or second factor, while a single item had a low loading on the second factor and a somewhat higher loading on the first. Nonetheless, we included that item on factor 2 so that each factor was defined by 10 core competencies (as opposed to 11 on factor 1 and 9 on factor 2). We calculated Cronbach’s Alpha to gauge the impact on reliability of including the questionable item on the second factor (a 10 and 10 item solution) versus the first (an 11 and 9 item solution): the 10 and 10 solution increased Cronbach’s Alpha for factor 1 by 0.005 and decreased it for factor 2 by 0.004, differences we judged inconsequential, particularly given the practical benefits of two factors of equal length that are easy to compare directly. Most importantly, in the 10 and 10 item solution each of the resulting factors was clearly defined by a different set of core competencies. The first factor was defined by core competencies that pertain to Routine functions (i.e., good clinical practice) carried out by CRCs in their everyday professional activities, while the second was clearly defined by core competencies that pertain to more Advanced and specialized regulatory functions performed by CRCs. Table 1 shows the mean, standard deviation, and factor score coefficients for each item.

Table 1. Twenty CICRP items administered to DIAMOND CTSA sites (N = 95)

* Each competency statement is preceded by an abbreviation of one of the eight ECRPTQ core competency domains and a number indicating that statement’s position in the original competency list. Abbreviations: CICRP, Competency Index for Clinical Research Professionals; CT, clinical trial operations; CTSA, clinical and translational science award; DM, data management and informatics; EP, ethical and participant safety considerations; LP, leadership and professionalism; MD, medicines development and regulation; SC, scientific concepts and research design; SM, study and site management.

** SC3 had a higher loading on Factor 1 (0.454) but was included in Factor 2 to equalize the number of items in each factor. This had inconsequential effects on scale reliability.
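
The reliability comparison behind this footnote (a 10 and 10 versus an 11 and 9 item assignment) can be reproduced with a small helper. A sketch, with hypothetical item-list names:

```python
# Cronbach's alpha for a pandas DataFrame of item scores (rows = respondents,
# columns = items); used here to compare alternative item assignments.
def cronbach_alpha(items):
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage, moving the ambiguous item "SC3" between factors:
# alpha_10_10 = cronbach_alpha(ratings[advanced_items + ["SC3"]])
# alpha_11_9  = cronbach_alpha(ratings[routine_items + ["SC3"]])
```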

Scoring the CICRP-II

We created three scores for both Routine Competencies (Factor 1) and Advanced Competencies (Factor 2). The first are factor regression scores, obtained by multiplying the respondent’s 0–10 self-rating by the item’s factor regression coefficient and summing across all 20 competency items for each factor. Factor regression scores have a mean of zero and unit variance, with approximately 95% of cases scoring between ±1.96. While factor regression scores are the most precise, they are tedious to calculate, not easily applied in practice, and comparisons across populations can be problematic.
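
A minimal sketch of this first method, assuming `ratings` is the respondents-by-items DataFrame from above and `coefficients` a hypothetical items-by-factors DataFrame holding the factor score coefficients from Table 1:

```python
# Factor regression scoring: each 0-10 self-rating is weighted by the item's
# factor score coefficient and the products are summed across all 20 items.
def factor_regression_scores(ratings, coefficients):
    # (respondents x items) @ (items x factors) -> (respondents x factors)
    return ratings[coefficients.index] @ coefficients
```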

The second scoring method sums a respondent’s 0–10 self-ratings across the 10 core competencies defining Routine Competencies and the 10 defining Advanced Competencies. Because it uses only the 10 core competency items for each factor, it is easier to score but less precise than the factor regression method. The summed score for each factor has a potential range from 0 to 100, with higher scores indicating higher self-rated competency.

The third scoring method dichotomizes responses to the 20 competencies, collapsing ratings of 0–5 to indicate “Not Competent” (scored 0) and ratings of 6–10 to indicate “Competent” (scored 1). A count score for the Routine and Advanced factors is created by simply counting the number of items on each factor for which a respondent claimed competence. Count scores are the easiest to calculate, with each factor having a potential range from 0 to 10. (It should be noted that the cut point between 5 and 6 is arbitrary; users may prefer a lower or higher point to define competence in their application.)
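
The two simpler methods reduce to a column sum and a thresholded count. A sketch, with hypothetical item lists and the arbitrary cut point exposed as a parameter:

```python
# Summed scoring: sum of ten 0-10 self-ratings, potential range 0-100.
def summed_score(ratings, items):
    return ratings[items].sum(axis=1)

# Count scoring: number of items rated at or above the "competent" cut,
# potential range 0-10. The default cut of 6 mirrors the 5/6 split above,
# but, as noted, users may prefer a different threshold.
def count_score(ratings, items, cut=6):
    return (ratings[items] >= cut).sum(axis=1)
```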

Table 2 gives the correlations between the scales under the three alternative scoring systems. The correlation between the two factors ranges from 0.627 to 0.688, depending on the scoring method, so the two factors share less than 50% of the variance in the 20 core competencies: although closely related, they are distinct. Factor regression scores for each factor are highly correlated with their respective summed scores (0.991 and 0.986, respectively), meaning summed scores can be used in place of the more complicated factor regression method with essentially no loss of precision. Further, both factor regression and summed scores correlate at 0.899 or greater with their respective count scores, indicating that the count scoring method can likewise be used with very little loss of precision.

Table 2. Correlations between factors with alternative scoring methods (DIAMOND Data; N = 95)
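
Continuing the hypothetical names from the sketches above, a correlation matrix like Table 2 can be assembled as follows:

```python
import pandas as pd

reg = factor_regression_scores(ratings, coefficients)
scores = pd.DataFrame({
    "routine_reg": reg.iloc[:, 0],
    "advanced_reg": reg.iloc[:, 1],
    "routine_sum": summed_score(ratings, routine_items),
    "advanced_sum": summed_score(ratings, advanced_items),
    "routine_cnt": count_score(ratings, routine_items),
    "advanced_cnt": count_score(ratings, advanced_items),
})
print(scores.corr())  # pairwise Pearson correlations, cf. Table 2
```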

Table 3 presents the statistical properties of each method for scoring the Routine and Advanced factors. The regression scores for both factors have zero means and variances near 1.0. Further, as one might expect, both the summed and count scores for Routine Competency were higher than the comparable scores for Advanced Competency, suggesting that even experienced CRCs at research-intensive CTSAs feel more competent to perform Routine clinical research functions than more esoteric, Advanced functions. Internal consistency reliability was high, with Cronbach’s Alpha values of 0.913 (Routine Competency) and 0.911 (Advanced Competency) for the summed scoring method and 0.856 (Routine Competency) and 0.862 (Advanced Competency) for the count scoring method.

Table 3. Statistical characteristics with alternative scoring methods (DIAMOND Data; N = 95)

Validation of the CICRP-II

Our approach to validity and validation testing is consistent with those of Sullivan [13] and Kane [14]. Kane proposes an argument-based approach in which claims of validity are judged according to the structure and plausibility of the validity argument. To advance a compelling argument for the validity of CICRP-II while adhering to standard practices in clinical and translational research, this study focuses primarily on known-groups validity (also known as between-groups validity) [15]. While other well-known types of validity tests can be equally rigorous, the narrow scope of the present study prevented further tests, such as tests of predictive validity against long-term outcomes, from being conducted. These and other limitations are detailed later in this work.

To assess between-groups validity, we tested the hypothesis that CRCs at the DIAMOND sites would report higher self-perceived competence to perform both Routine and Advanced clinical research functions than CRCs working outside research-intensive CTSA sites. We used data collected by similar survey research methods from two populations of clinical research professionals. The DIAMOND survey collected data from clinical research professionals at the four CTSA hubs; the 95 respondents who said that they were currently working as a CRC and had 1 year or more of experience in that role constitute the sample of DIAMOND CRCs. The JTF surveyed clinical research professionals working in various research settings across the USA and Canada; the 81 respondents who said they were working as a CRC in one of these settings constitute the JTF CRC sample for this analysis.

Table 4 presents characteristics of the DIAMOND CRC and JTF CRC samples. Two-thirds of the JTF CRC respondents reported having a bachelor’s degree or less, 28.4% a master’s degree, and less than 5% a doctorate, compared to just over half of the DIAMOND CRCs with a bachelor’s, 35.8% with a master’s, and 10.5% with a doctoral degree. While years of education did not differ significantly between the JTF and DIAMOND samples, 22% of DIAMOND CRCs held academic credentials (i.e., post-bachelor’s certificate or master’s) specifically in clinical research compared to only 7% of CRCs who responded to the JTF survey (p = 0.012). There were also significant differences between JTF and DIAMOND CRC respondents in years of clinical research experience (p = 0.035): some 37% of JTF respondents had 11 or more years of experience in clinical research compared to only 19% of DIAMOND respondents. Further, there were notable differences in professional society membership. The professional organization of choice among CRCs responding to the JTF survey was ACRP, whereas CRCs at the CTSA hubs were more likely to be members of SoCRA; over 80% of the members of each society had passed their society’s certification examination.

Table 4. Characteristics of CRCs in the JTF and DIAMOND survey data

Abbreviations: ACRP, Association of Clinical Research Professionals; CRC, clinical research coordinator; JTF, Joint Task Force; SoCRA, Society of Clinical Research Associates.

Results

We calculated scores on the CICRP-I General Index and its four subscales, as well as on the Routine and Advanced factors (CICRP-II), for the CRC respondents to the JTF survey (N = 81) and to the DIAMOND survey (N = 95). We used the simplified count scoring method rather than the arithmetically cumbersome factor regression or summed scoring methods because the JTF data were available only as dichotomous responses to the core competencies. The results are shown in Table 5.

Table 5. Self-assessed competency of CRCs on CICRP-I and CICRP-II (JTF survey, N = 81; DIAMOND survey, N = 95)

Abbreviations: CICRP, Competency Index for Clinical Research Professionals; CRC, clinical research coordinator; JTF, Joint Task Force.

CRCs from the DIAMOND CTSA hubs scored much higher than the JTF CRCs on the General Index and all four subscales of CICRP-I and on both the Routine and Advanced factors of CICRP-II. The differences were statistically significant beyond the 0.001 level for all comparisons except the CICRP-I Medicines Development subscale. These large and consistent differences between CRCs participating in the JTF survey and those participating in the DIAMOND project occur on both CICRP-I and CICRP-II even though the JTF respondents had significantly more years of experience as clinical research professionals. These differences in self-perceived competence clearly justify the decision to base CICRP-II only on experienced CRC respondents at research-intensive sites and confirm its between-groups validity.
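
The paper does not name the specific significance test used; as one plausible sketch of the group comparison, a Welch’s t-test on count scores, where `diamond_routine` and `jtf_routine` are hypothetical arrays of per-respondent 0–10 count scores for the two samples:

```python
# Welch's t-test (unequal variances) comparing DIAMOND (N = 95) and
# JTF (N = 81) CRCs on the Routine Competency count score.
from scipy import stats

t_stat, p_value = stats.ttest_ind(diamond_routine, jtf_routine, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4g}")
```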

Discussion

While it is widely recognized that the clinical research enterprise requires a well-trained workforce competent to execute increasingly complex drug and device development protocols, it is equally important to understand the different settings in which trials are conducted. In all of these settings the CRC plays a central role in the increasingly high-stakes and complicated conduct of clinical research, and competency-based training is critical to the task demands placed on CRCs. Preparing and retaining clinical research professionals of all types is made more difficult by frequent staff turnover and limited opportunities for advancement, particularly at academic health centers [16–18].

Because CICRP-I was created from self-reported competency data collected from individuals working in all roles and settings in the clinical research enterprise, it is a tool with general applicability for assessing preparedness and training needs across roles [19]. Its four subscales provide a degree of specificity: scores on the subscales of CICRP-I can be used to identify an individual’s training needs, whether related to GCPs, ethics and patient safety, data management, or regulatory affairs pertaining to medicine and device development. In contrast, CICRP-II was created to measure the self-perceived competence of experienced CRCs at research-intensive sites and is therefore the “gold standard” for assessing preparation and the need for training related to Routine and/or Advanced research functions. CICRP-II can serve as a high-precision tool for use in research (factor regression scores), and both CICRP-I and CICRP-II are easy-to-use tools for self-assessment (summed or count scores) and quick-to-score tools for informal evaluation in human resource offices. Both the CICRP-I and CICRP-II indices and directions for scoring are available on the DIAMOND Portal at: https://clic-ctsa.org/diamond.

Either tool can provide data on an individual’s self-perceived preparedness for principal investigators or project managers to use when assessing whether an individual is qualified for a specific role on a research team and, if not, what additional training would be necessary. Similarly, funding agencies, CROs, site management organizations (SMOs), and even IRBs could use CICRP data along with other assessment tools (e.g., the CRAI) as an indicator of the overall readiness of a research team and of whether the team includes individuals with competency across all domains of the research process. Finally, both measures can be recommended by professional organizations (e.g., ACRP and SoCRA) as self-assessment tools to help individuals gauge their readiness to sit for certification exams.

Assessing the competence of an individual to work in the clinical research enterprise through certification exams is necessary but is only one aspect of assuring a competent workforce. There is also a need to evaluate and accredit the education and training programs that purport to prepare individuals for clinical research work. There is currently no formal mechanism to assess the quality of the many training programs offered by various vendors both online and on-site; most of these programs are subject to limited evaluation, and it is essential that assessment tools and procedures be developed to evaluate their quality and effectiveness. The CoAPCR, in collaboration with the pharmaceutical industry as well as professional organizations including ACRP and SoCRA, has in place a process approved by the Commission on Accreditation of Allied Health Education Programs (CAAHEP) for evaluating and accrediting degree-granting academic programs. Currently, many CoAPCR member schools are utilizing the CICRP indices to evaluate their programs, and several have begun the formal accreditation process.

While CICRP-I and CICRP-II were developed to further the professionalization of the clinical research workforce, we do not recommend that either be used as the sole measure of an individual’s competence to function in the clinical research enterprise or as the sole measure of the quality or effectiveness of an education or training program. Self-assessments of competence are often prone to overconfidence, which can produce biased estimates of one’s actual level of competence [20,21]. The ultimate test of the CICRP-I and CICRP-II tools will be their correlation with yet-to-be-developed objective measures of performance in the clinical research enterprise.

Limitations

There were notable differences between the JTF and DIAMOND surveys. The JTF survey included clinical research professionals working in academic health centers, some of whom may have been employed at a CTSA or another research-intensive institution. If there were respondents from CTSAs in the JTF survey, their presence would tend to reduce the magnitude of the observed differences in perceived competence between JTF and DIAMOND CRCs. In other words, the differences we found across both CICRP-I and CICRP-II are likely to be conservative.

There were also differences in the data collection methods, particularly regarding the core competencies. First, the JTF survey asked respondents to rate their self-perceived competence on 51 core competencies using a four-point scale that was dichotomized into “not competent” and “competent”; factor analysis identified 20 core competencies forming five factors. The DIAMOND survey asked respondents to rate their self-perceived competence on these 20 core competencies on an 11-point scale, from which factor analytic methods yielded two factors. Second, the wording of the core competencies specified by the JTF has undergone wordsmithing with input from several stakeholders, beginning with the ECRPTQ project. The modifications were minor changes in grammar, simplifications of sentence structure, or word changes such as replacing “clinical trial” with “clinical research” to make an item more broadly applicable. Regardless of how minor, some changes may have created non-negligible differences in the interpretation of items and thus changes in the factor structure. Additional data, including data being collected at CoAPCR institutions as well as from the clinical research workforce, are necessary to confirm the value of the CICRP indices for assessing the preparedness of the clinical research workforce.

Acknowledgments

The authors acknowledge several individuals who made substantial contributions to this work. Professor Susan Murphy, ScD, OTR and Mary Burgess provided editorial guidance, and the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) project team provided the material support for the research, including REDCap support and statistical assistance. This work was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through award number U01TR002013.

Financial Support

Supported by NCATS U01TR002013 “Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND)”; and in part by The Ohio State University (UL1TR002733), the Michigan Institute for Clinical and Health Research (MICHR) (UL1TR002240), the Tufts Clinical and Translational Science Institute (CTSI) (UL1TR002544), and the University of Rochester Clinical and Translational Science Institute (UL1TR000042). In addition, support was received from the Enhancing Clinical Research Professionals’ Training and Qualification (ECRPTQ) project (UL1TR000433).

Disclosures

There are no conflicts of interest.

References

1. Hornung CA, et al. Competency indices to assess the knowledge, skills and abilities of clinical research professionals. International Journal of Clinical Trials 2018; 5(1): 46–53.
2. Institute of Medicine. Transforming Clinical Research in the United States. Washington, DC: National Academies Press, 2010.
3. Califf R, et al. Appendix D. Discussion paper: the clinical trials enterprise in the United States: a call for disruptive innovation. In: Institute of Medicine, ed. Envisioning a Transformed Clinical Trials Enterprise in the United States: Establishing an Agenda for 2020. Washington, DC: National Academies Press; 2012.
4. Jones CT, et al. Defining competencies in clinical research: issues in clinical research education. Research Practitioner 2012; 13(3): 99–107.
5. Sonstein S, et al. Global self-assessment of competencies, role relevance, and training needs among clinical research professionals. Clinical Researcher 2016; 30(6): 38–45. DOI: 10.14524/CR-16-0016.
6. Sonstein SA, et al. Moving from compliance to competency: a harmonized core competency framework for the clinical research professional. Clinical Researcher 2014; 28(3): 17–23.
7. Joint Task Force for Clinical Trial Competency. Core competency framework, version 2.0. 2017. Retrieved from http://clinicaltrialcompetency.org.
8. Shanley T. Enhancing clinical research professionals’ training and qualifications. Retrieved from http://www.ctsa-gcp.org/. Accessed February 12, 2016.
9. Mullikin E, Bakken LL, Betz NE. Assessing research self-efficacy in physician scientists: the Clinical Research Appraisal Inventory. Journal of Career Assessment 2007; 15(3): 367–387.
10. Lipira L, et al. Evaluation of clinical research training programs using the Clinical Research Appraisal Inventory. Clinical and Translational Science 2010; 3(5): 243–248.
11. Robinson G, et al. A shortened version of the Clinical Research Appraisal Inventory: CRAI-12. Academic Medicine 2013; 88(9): 1340–1345.
12. Eller L, Lev EL, Bakken LL. Development and testing of the Clinical Research Appraisal Inventory-Short Form. Journal of Nursing Measurement 2014; 22(1): 106–119.
13. Sullivan GM. A primer on the validity of assessment instruments. Journal of Graduate Medical Education 2011; 3(2): 119–120.
14. Kane MT. An argument-based approach to validity. Psychological Bulletin 1992; 112(3): 527–535.
15. DeVellis RF. Scale Development: Theory and Applications. 2nd ed. Newbury Park, CA: Sage Publications, 2003.
16. Snyder D, et al. Retooling institutional support infrastructure for clinical research. Contemporary Clinical Trials 2016; 48: 139–145.
17. Causey M. Professional pathways boost staff retention in clinical research settings. ACRP Blog; 2017. Retrieved from www.acrpnet.org/2017/04/24/professional-pathways-boost-staff-retention-clinical-research-settings/.
18. Applied Clinical Trials. Clinical trials talent survey report; 2018. Retrieved from http://www.appliedclinicaltrialsonline.com/node/351341/done?sid=15167.
19. Bandura A. A guide for constructing self-efficacy scales. In: Pajares F, Urdan T, eds. Self-Efficacy Beliefs of Adolescents. Greenwich, CT: Information Age Publishing; 2006: 307–337.
20. Bjork RA. Assessing our own competence: heuristics and illusions. In: Gopher D, Koriat A, eds. Attention and Performance XVII. Cognitive Regulation of Performance: Interaction of Theory and Application. Cambridge, MA: MIT Press; 1998: 435–459.
21. Dunning D, Heath C, Suls JM. Flawed self-assessment: implications for health, education, and the workplace. Psychological Science in the Public Interest 2004; 5(3): 69–106.