Many clinical trials leverage real-world data. Typically, these data are manually abstracted from electronic health records (EHRs) and entered into electronic case report forms (CRFs), a time- and labor-intensive process that is also error-prone and may miss information. Automated transfer of data from EHRs to CRFs has the potential to reduce data abstraction and entry burden as well as improve data quality and safety.
We conducted a test of automated EHR-to-CRF data transfer for 40 participants in a clinical trial of hospitalized COVID-19 patients. We determined which coordinator-entered data could be automated from the EHR (coverage) and the frequency with which values from the automated EHR feed exactly matched those entered by study personnel for the actual study (concordance).
The automated EHR feed populated 10,081 of 11,952 (84%) coordinator-completed values. For fields where both the automation and study personnel provided data, the values matched exactly 89% of the time. Concordance was highest for daily laboratory results (94%), which also required the most personnel resources (30 minutes per participant). In a detailed analysis of 196 instances in which the personnel-entered and automated values differed, both a study coordinator and a data analyst agreed that 152 (78%) were the result of data entry error.
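For readers who want to reproduce the two measures above on their own data, the following is a minimal sketch of one way to compute coverage and concordance from paired records; the column names, the pandas-based layout, and the strict equality test for an "exact match" are illustrative assumptions rather than the study's actual pipeline.

```python
# Minimal sketch: coverage and concordance between an automated EHR feed and
# coordinator-entered CRF values. Column names and the equality-based match
# rule are illustrative assumptions, not the study's definitions.
import pandas as pd

def coverage_and_concordance(crf: pd.DataFrame, ehr: pd.DataFrame) -> dict:
    """crf/ehr: one row per (participant_id, field), each with a 'value' column."""
    merged = crf.merge(ehr, on=["participant_id", "field"],
                       how="left", suffixes=("_crf", "_ehr"))

    # Coverage: fraction of coordinator-completed values the feed also populated.
    populated = merged["value_ehr"].notna()
    coverage = populated.mean()

    # Concordance: among values both sources provided, fraction matching exactly.
    both = merged[populated & merged["value_crf"].notna()]
    concordance = (both["value_crf"] == both["value_ehr"]).mean()

    return {"coverage": coverage, "concordance": concordance}
```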
An automated EHR feed has the potential to significantly decrease study personnel effort while improving the accuracy of CRF data.
Coronavirus Disease 2019 (COVID-19) instigated a flurry of clinical research activity. The unprecedented pace with which trials were launched left an early void in data standardization, limiting the potential for subsequent data pooling. To facilitate data standardization across emerging studies, the National Heart, Lung, and Blood Institute (NHLBI) charged two groups with harmonizing data collection, and these groups collaborated to create a concise set of COVID-19 Common Data Elements (CDEs) for clinical research.
Our iterative approach followed three guiding principles: 1) draw from existing multi-center COVID-19 clinical trials as precedents, 2) incorporate existing data elements and data standards whenever possible, and 3) align with data standards that facilitate data sharing and regulatory submission. We also supported rapid implementation of the CDEs in NHLBI-funded studies and iteratively refined the CDEs based on feedback from those study teams.
The NHLBI COVID-19 CDEs are publicly available and are being used in current COVID-19 clinical trials. CDEs are organized into domains, and each data element is classified within a three-tiered prioritization system. The CDE manual is hosted publicly at https://nhlbi-connects.org/common_data_elements with an accompanying data dictionary and implementation guidance.
The NHLBI COVID-19 CDEs are designed to aid data harmonization across studies to achieve the benefits of pooled analyses. We found that organizing CDE development around our three guiding principles focused our efforts and allowed us to adapt as COVID-19 knowledge advanced. As these CDEs continue to evolve, they could be generalized for use in other acute respiratory illnesses.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
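As a concrete illustration of the kind of longitudinal summary such a model builds on, the sketch below estimates an empirical day-to-day transition matrix from daily status assessments. The state labels, data layout, and simple first-order Markov treatment are assumptions for illustration; the multistate model described above would typically add covariates and handle censoring and absorbing states more formally.

```python
# Minimal sketch: empirical day-to-day transition matrix from daily ordinal
# clinical-status assessments. State labels and the first-order Markov
# treatment are illustrative assumptions, not the study's actual model.
import numpy as np
import pandas as pd

STATES = ["ward", "icu", "discharged", "dead"]  # assumed status categories

def transition_matrix(daily: pd.DataFrame) -> pd.DataFrame:
    """daily: columns ['patient_id', 'day', 'state'], one row per patient-day."""
    counts = pd.DataFrame(0.0, index=STATES, columns=STATES)
    for _, grp in daily.sort_values("day").groupby("patient_id"):
        states = grp["state"].tolist()
        # Count observed transitions between consecutive days for this patient.
        for src, dst in zip(states, states[1:]):
            counts.loc[src, dst] += 1
    # Row-normalize to estimate P(state tomorrow | state today).
    return counts.div(counts.sum(axis=1).replace(0, np.nan), axis=0)
```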
As clinical trials were rapidly initiated in response to the COVID-19 pandemic, Data and Safety Monitoring Boards (DSMBs) faced unique challenges overseeing trials of therapies that had never been tested, for a disease that had not yet been characterized. Traditionally, individual DSMBs do not interact with one another and lack the benefit of seeing data from other accruing trials in an aggregated analysis that would help them meaningfully interpret safety signals for similar therapeutics. In response, we developed a compliant DSMB Coordination (DSMBc) framework to allow the DSMB from one study investigating the use of SARS-CoV-2 convalescent plasma to treat COVID-19 to review data from similar ongoing studies for the purpose of safety monitoring.
The DSMBc process included engagement of DSMB chairs and board members, execution of contractual agreements, secure data acquisition, generation of harmonized reports utilizing statistical graphics, and secure report sharing with DSMB members. Detailed process maps, a secure portal for managing DSMB reports, and templates for data sharing and confidentiality agreements were developed.
Four trials participated. Data from one trial were successfully harmonized with those of an ongoing trial. Harmonized reports allowing for visualization and drill-down into the data were presented to the ongoing trial’s DSMB. While DSMB deliberations are confidential, the Chair confirmed successful review of the harmonized report.
It is feasible to coordinate DSMB reviews of multiple independent studies of a similar therapeutic in similar patient cohorts. The materials presented mitigate challenges to DSMBc and will help expand these initiatives so DSMBs may make more informed decisions with all available information.
The Clinical and Translational Science Award Program (CTSA) Trial Innovation Network (TIN) was launched in 2016 to increase the efficiency and effectiveness of multisite trials by supporting the development of national infrastructure. With the advent of the COVID-19 pandemic, it was therefore well positioned to support clinical trial collaboration. The TIN was leveraged to support two initiatives: (1) to create and evaluate a mechanism for coordinating Data and Safety Monitoring Board (DSMB) activities among multiple ongoing trials of the same therapeutic agents, and (2) to share data across clinical trials so that smaller, likely underpowered studies could be combined to produce meaningful and actionable data through pooled analyses. The success of these initiatives was understood to be dependent upon the willingness of investigators, study teams, and US National Institutes of Health research networks to collaborate and share information.
To inform these two initiatives, we conducted semistructured interviews with members of CTSA hubs and clinical research stakeholders that probed barriers and facilitators to collaboration. Thematic analysis identified topics relevant across institutions, individuals, and DSMBs.
The DSMB coordination initiative was viewed as less controversial, while the data pooling initiative was seen as complex because of its potential impact on publication, authorship, and the rewards of discovery. Barriers related to resources, centralization, and technical work were significant, but interviewees suggested these could be handled by the provision of central funding and supportive frameworks. The more intractable findings were related to issues around credit and ownership of data.
Based on our interviews, we conclude with nine recommended actions that can be implemented to support collaboration.
Rigorous scientific review of research protocols is critical to making funding decisions, and to the protection of both human and non-human research participants. Given the increasing complexity of research designs and data analysis methods, quantitative experts, such as biostatisticians, play an essential role in evaluating the rigor and reproducibility of proposed methods. However, there is a common misconception that a statistician’s input is relevant only to sample size/power and statistical analysis sections of a protocol. The comprehensive nature of a biostatistical review coupled with limited guidance on key components of protocol review motivated this work. Members of the Biostatistics, Epidemiology, and Research Design Special Interest Group of the Association for Clinical and Translational Science used a consensus approach to identify the elements of research protocols that a biostatistician should consider in a review, and provide specific guidance on how each element should be reviewed. We present the resulting review framework as an educational tool and guideline for biostatisticians navigating review boards and panels. We briefly describe the approach to developing the framework, and we provide a comprehensive checklist and guidance on review of each protocol element. We posit that the biostatistical reviewer, through their breadth of engagement across multiple disciplines and experience with a range of research designs, can and should contribute significantly beyond review of the statistical analysis plan and sample size justification. Through careful scientific review, we hope to prevent excess resource expenditure and risk to humans and animals on poorly planned studies.
Statistical literacy is essential in clinical and translational science (CTS). Statistical competencies have been published to guide coursework design and selection for graduate students in CTS. Here, we describe common elements of graduate curricula for CTS and identify gaps in the statistical competencies.
We surveyed statistics educators using e-mail solicitation sent through four professional organizations. Respondents rated the degree to which 24 educational statistical competencies were included in required and elective coursework in doctoral-level and master’s-level programs for CTS learners. We report competency results from institutions with Clinical and Translational Science Awards (CTSAs), reflecting institutions that have invested in CTS training.
There were 24 CTSA-funded respondents representing 13 doctoral-level programs and 23 master’s-level programs. Competencies covered extensively in required coursework for all doctoral-level programs were basic principles of probability and hypothesis testing, understanding the implications of selecting appropriate statistical methods, and computing appropriate descriptive statistics. The only competency extensively covered in required coursework for all master’s-level programs was understanding the implications of selecting appropriate statistical methods. The least covered competencies included understanding the purpose of meta-analysis and the uses of early stopping rules in clinical trials. Competencies considered to be less fundamental and more specialized tended to be covered less frequently in graduate courses.
While graduate courses in CTS tend to cover many statistical fundamentals, learning gaps exist, particularly for more specialized competencies. Educational material to fill these gaps is necessary for learners pursuing these activities.
Acute care research (ACR) is uniquely challenged by the constraints of recruiting participants and conducting research procedures within minutes to hours of an unscheduled critical illness or injury. Existing competencies for clinical research professionals (CRPs) are gaining traction but may have gaps for the acute environment. We sought to expand existing CRP competencies to include the specialized skills needed for ACR settings.
Qualitative data collected from job shadowing, clinical observations, and interviews were analyzed to assess the educational needs of the acute care clinical research workforce. We identified competencies necessary to succeed as an ACR-CRP, and then applied Bloom’s Taxonomy to develop characteristics into learning outcomes that frame both knowledge to be acquired and job performance metrics.
Twenty-eight special-interest competencies for ACR-CRPs were identified within the eight domains set by the Joint Task Force (JTF) for Clinical Trial Competency. While the JTF does not prioritize the eight domains, in ACR an emphasis on Communication and Teamwork, Clinical Trials Operations, and Data Management and Informatics was observed. Within each domain, distinct proficiencies and unique personal characteristics essential for success were identified. The competencies suggest that a combination of competency-based training, behavioral-based hiring practices, and continuing professional development will be essential to ACR success.
The competencies developed for ACR can serve as a training guide for CRPs to be prepared for the challenges of conducting research within this vulnerable population. Hiring, training, and supporting the development of this workforce are foundational to clinical research in this challenging setting.
It is increasingly essential for medical researchers to be literate in statistics, but the requisite degree of literacy is not the same for every statistical competency in translational research. Statistical competency can range from ‘fundamental’ (necessary for all) to ‘specialized’ (necessary for only some). In this study, we determine the degree to which each competency is fundamental or specialized.
We surveyed members of four professional organizations, targeting doctorally trained biostatisticians and epidemiologists who had taught statistics to medical research learners in the past 5 years. Respondents rated 24 educational competencies on a 5-point Likert scale anchored by ‘fundamental’ and ‘specialized.’
There were 112 responses. Nineteen of the 24 competencies were rated as fundamental. The competencies considered most fundamental were assessing sources of bias and variation (95%), recognizing one’s own limits with regard to statistics (93%), and identifying the strengths and limitations of study designs (93%). The least endorsed items were meta-analysis (34%) and stopping rules (18%).
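To make the endorsement percentages concrete, the sketch below shows one way such figures could be derived from raw Likert responses; the scoring rule (counting ratings at the ‘fundamental’ end of the scale as endorsements) and the data layout are assumptions for illustration, not necessarily the survey’s actual analysis.

```python
# Minimal sketch: summarizing Likert ratings into "percent rating the
# competency as fundamental." Treating ratings of 1-2 on a 5-point scale
# (1 = fundamental anchor, 5 = specialized anchor) as endorsements is an
# illustrative assumption; the survey's actual scoring rule may differ.
import pandas as pd

def percent_fundamental(ratings: pd.DataFrame, cutoff: int = 2) -> pd.Series:
    """ratings: one row per respondent, one column per competency, values 1-5."""
    endorsed = ratings.le(cutoff)                      # True where rated fundamental
    return (endorsed.mean() * 100).round(0).sort_values(ascending=False)
```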
We have identified the statistical competencies needed by all medical researchers. These competencies should be considered when designing statistical curricula for medical researchers and should inform which topics are taught in graduate programs and evidence-based medicine courses where learners need to read and understand the medical research literature.