Attacks on minoritized communities and increasing awareness of the societal causes of health disparities have combined to highlight deep systemic inequities. In response, academic health centers have prioritized justice, equity, diversity, and inclusion (JEDI) in their strategic goals. To have a sustained impact, JEDI efforts cannot be siloed; rather, they must be woven into the fabric of our work and systematically assessed to promote meaningful outcomes and accountability. To this end, the University of Pittsburgh’s Institute for Clinical Research Education assembled a task force to create and apply a rubric to identify short- and long-term JEDI goals, assess the current state of JEDI at our Institute, and make recommendations for immediate action. To ensure deep buy-in, we gathered input from diverse members of our academic community, who served on targeted subcommittees. We then applied a three-step process to ensure rapid forward progress. We emerged with concrete actions for priority focus and a plan for ongoing assessment of JEDI institutionalization. We believe our process and rubric offer a scalable and adaptable model for other institutions and departments to follow as we work together across academic medical institutions to put our justice, equity, diversity, and inclusion goals into meaningful action.
High writing self-efficacy and self-regulation are tied to publication and grant submission. Writers with these attributes are more productive. We investigated whether participating in a Shut Up & Write!®-style intervention (SUAW) would produce statistically significant gains in writing self-efficacy and self-regulation, as measured by pre- and post-participation surveys.
Forty-seven medical students, TL1/KL2 scholars, and early-career faculty from across the USA expressed interest in participating, with 37 completing the pre-survey. We conducted a 12-week SUAW series on Zoom and measured the effect using a pre-post survey adapted from the Writer Self-Perception Scale. Paired t-tests (α = 0.05) were conducted on three subscales to test for significant differences between pre- and post-test means. The subscales reflected writing attitudes, writing strategies, and avoiding writing distractions. Subscales showed acceptable internal consistency with Cronbach’s alphas of 0.80, 0.71, and 0.72, respectively.
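An analysis of the kind described above (paired t-tests at α = 0.05, with subscale reliability checked via Cronbach’s alpha) can be sketched in Python. The data below are simulated placeholders, not the study’s data; the `scipy`/`numpy` calls are standard, and `cronbach_alpha` is a helper defined here, not part of either library.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated pre/post subscale scores for 24 paired respondents (illustrative only)
rng = np.random.default_rng(42)
pre = rng.normal(3.2, 0.5, size=24)
post = pre + rng.normal(0.3, 0.4, size=24)  # simulated post-intervention gain

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test on the same respondents
significant = p_value < 0.05                  # decision at alpha = 0.05
```

The paired (rather than independent-samples) test is the appropriate choice here because each respondent contributes both a pre- and a post-score, and the test operates on the within-person differences.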
Twenty-seven participants attended at least one session. Of these, 81% presented as female, and 60% were from NIH-defined Underrepresented Backgrounds and/or were from Minority-Serving Institutions. Twenty-four completed both the pre- and post-surveys. Sixty percent had previously participated in an activity similar to SUAW. We found significant improvements in writing attitudes (p = 0.020) and writing strategies (p = 0.041) for those who had previously participated. For those who had not previously participated, we found improved writing strategies (p = 0.002). Eighty percent were very satisfied/satisfied with SUAW.
Researchers have tied writing self-efficacy and self-regulation to timely publication and grant submission. We found significant gains in self-efficacy and self-regulation, suggesting that participation in a SUAW-style intervention may increase writing productivity.
In 2015, the University of Pittsburgh partnered with several Minority Serving Institutions to develop the Leading Emerging and Diverse Scientists to Success (LEADS) Program. LEADS was designed to provide skills development, mentoring, and networking support to early career underrepresented faculty.
LEADS included three components: skills training (e.g., grant and manuscript writing and team science), mentoring, and networking opportunities. Scholars completed a pre- and post-test survey and an annual alumni survey that included measures on burnout, motivation, leadership, professionalism, mentoring, job and career satisfaction, networking, and an assessment of their research self-efficacy.
Scholars demonstrated a significant increase in their research self-efficacy after completing all the modules (t = 6.12; P < 0.001). Collectively, LEADS scholars submitted 73 grants and secured 46, for a 63% success rate. Most scholars either agreed or strongly agreed that their mentor was effective in helping to develop their research skills (65%) and provided effective counseling (56%). Scholars did experience increased burnout, with 50% feeling burned out at the exit survey (t = 1.42; P = 0.16) and 58% reporting feelings of burnout at the most recent survey in 2020 (t = 3.96; P < 0.001).
Our findings support the claim that participation in LEADS enhanced critical research skills, provided networking and mentoring opportunities, and contributed to research productivity for scientists from underrepresented backgrounds.
The learning sciences have yielded a wealth of insights about the mechanisms and conditions that promote learning, yet the findings from this body of research often do not make their way into educational practice. This fundamentally translational problem is one we believe that educators from translational fields, with their evidence-based orientation and familiarity with the challenges and importance of translation, are well-positioned to address. Here, we provide a primer on the learning sciences to guide educators in the Clinical and Translational Science Institutes and other organizations that train translational researchers. We (a) describe the unique teaching and learning environment in which this training occurs, and why it necessitates attention to learning research and its appropriate application, (b) explain what the learning sciences are, (c) distill the complex science of learning into core principles, (d) situate recent developments in the field within these principles, and (e) explain, in practical terms, how these principles can inform our teaching.
Human-centered design (HCD) training offers the potential to improve both team processes and products. However, the use of HCD to improve the quality of team science is a relatively recent application, and its benefits and challenges have not been rigorously evaluated. We conducted a qualitative study with health sciences researchers trained in HCD methods. We aimed to determine how researchers applied HCD methods and perceived the benefits and barriers to using HCD on research teams.
We conducted 1-hour, semi-structured interviews with trainees from three training cohorts. Interviews focused on perceptions of the training, subsequent uses of HCD, barriers and facilitators, and perceptions of the utility of HCD to science teams. Data analysis was conducted using Braun and Clarke’s process for thematic analysis.
We interviewed nine faculty and nine staff trained in HCD methods and identified four themes encompassing HCD use, benefits, challenges, and tensions between HCD approaches and academic culture.
Trainees found HCD relevant to research teams for stakeholder engagement, research design, project planning, meeting facilitation, and team management. They also described benefits of HCD in five distinct areas: creativity, egalitarianism, structure, efficiency, and visibility. Our data suggest that HCD has the potential to help researchers work more inclusively and collaboratively on interdisciplinary teams and generate more innovative and impactful science. The application of HCD methods is not without challenges; however, we believe these challenges can be overcome with institutional investment.
High impact biomedical research is increasingly conducted by large, transdisciplinary, multisite teams in an increasingly collaborative environment. Thriving in this environment requires robust teamwork skills, which are not acquired automatically in the course of traditional scientific education. Team science skills training does exist, but most is directed at clinical care teams, not research teams, and little is focused on the specific training needs of early-career investigators, whose early team leadership experiences may shape their career trajectories positively or negatively. Our research indicated a need for team science training designed specifically for early-career investigators.
To address this need, we designed and delivered a 2-day workshop focused on teaching team science skills to early-career investigators. We operationalized team science competencies, sought the advice of team science experts, and performed a needs assessment composed of a survey and a qualitative study. Through these multiple approaches, we identified and grouped training priorities into three broad training areas and developed four robust, hands-on workshop sessions.
Attendees comprised 30 pre- and post-doc fellows (TL1) and early-career faculty (KL2 and K12). We assessed impact with a pre- and post-workshop survey adapted from the Team Skills Scale. Results from the pre- and post-test Wilcoxon signed-rank analysis (n = 25) showed statistically significant improvement in team science skills and confidence. Open-ended responses indicated that the workshop focus was appropriate and well targeted to the trainees’ needs.
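A pre/post comparison of the kind reported above can be run with `scipy`’s Wilcoxon signed-rank test, a non-parametric alternative to the paired t-test that is appropriate for ordinal scale data such as the Team Skills Scale. The scores below are simulated stand-ins, not the workshop data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated pre/post Team Skills Scale totals for 25 paired respondents
pre = rng.integers(40, 70, size=25).astype(float)
post = pre + rng.normal(5.0, 2.0, size=25)  # simulated post-workshop gain

# Wilcoxon signed-rank test on the paired differences; makes no
# normality assumption, only that differences can be ranked
stat, p_value = stats.wilcoxon(pre, post)
improved = p_value < 0.05
```

With small samples (n = 25 here) and Likert-style totals, the signed-rank test is a common conservative choice, since it relies on the ranks of the paired differences rather than their magnitudes.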
Although team science education is still very much in its infancy, these results suggest that training targeted to early-career investigators improves team skills and may foster improved collaboration.
A national need is to prepare for and respond to accidental or intentional disasters categorized as chemical, biological, radiological, nuclear, or explosive (CBRNE). These incidents require specific subject-matter expertise, yet have commonalities. We identify 7 core elements comprising CBRNE science that require integration for effective preparedness planning and public health and medical response and recovery. These core elements are (1) basic and clinical sciences, (2) modeling and systems management, (3) planning, (4) response and incident management, (5) recovery and resilience, (6) lessons learned, and (7) continuous improvement. A key feature is the ability of relevant subject-matter experts to integrate information into response operations. We propose the CBRNE medical operations science support expert as a professional who (1) understands that CBRNE incidents require an integrated systems approach, (2) understands the key functions and contributions of CBRNE science practitioners, (3) helps direct strategic and tactical CBRNE planning and responses through first-hand experience, and (4) provides advice to senior decision-makers managing response activities. Recognizing CBRNE science as a distinct competency and establishing the CBRNE medical operations science support expert would inform the public of the enormous progress made, broadcast opportunities for new talent, and enhance the sophistication and analytic expertise of senior managers planning for and responding to CBRNE incidents.
Introduction: Early team experiences can influence the professional trajectories of early-career investigators profoundly, yet they remain underexplored in the team science literature, which has focused primarily on large, multisite teams led by established researchers. To better understand the unique challenges of teams led by early-career investigators, we conducted a qualitative pilot study.
Methods: Interviews were conducted with the principal investigator and members of 5 teams led by KL2 and K12 scholars at the University of Pittsburgh. A code book was developed and thematic analysis was conducted.
Results: Seven distinct themes emerged. Interview subjects reported a high level of trust and strong communication patterns on their teams; however, the data also suggested underlying tensions that have the potential to escalate into larger problems if unaddressed.
Conclusions: This study yields a deeper understanding of teams led by early-career investigators, which can help us provide appropriately targeted training and support.
OBJECTIVES/SPECIFIC AIMS: Explain the difference between creative and critical thinking. Practice and enhance critical thinking skills. Display innovative thinking through creative solutions and insights. Critically evaluate evidence in research. Think imaginatively, actively seeking out new points of view. METHODS/STUDY POPULATION: Offer an online course in Critical and Creative Thinking to junior researchers to improve their capacity to think and to transform their ideas into research questions and aims that bring new options to the field of clinical and translational research. Evaluate their improvement through evaluation forms and exercises that show their process of thinking imaginatively. RESULTS/ANTICIPATED RESULTS: The scholars understood the importance of critical and creative thinking in their careers, believed they could apply the insights and knowledge from the course in their grant and paper writing, and recognized that they do not always consider whether they are being critical or creative in their thinking and actions. DISCUSSION/SIGNIFICANCE OF IMPACT: The course helped participants improve their capacity to think, and they saw a need to develop more systematic thought processes in their lives and work. The junior researchers will understand the difference between opinion, reasoned judgment, and fact, and they will be able to judge the credibility of an information source using criteria such as authorship, currency, and potential bias, which can improve their grant submissions and scientific writing skills.
Little has been published about competency-based education in academic medicine, in particular how competencies are or should be assessed. This paper re-examines a competency-based assessment for M.S. students in clinical research, and “assesses the assessment” 4 years into its implementation.
Data were gathered from student surveys and interviews with program advisors, and common themes were identified. We then made refinements to the assessment, and student surveys were administered to evaluate the impact of the changes.
Research results suggested the need to improve communication, time the assessment to align with skills development and opportunities for planning, streamline the process, and clarify expectations with examples and templates. After implementing these changes, data suggest that student satisfaction has improved without any reduction in academic rigor.
The effective implementation of competency-based training in clinical and translational research requires the development of a scholarly literature on effective methods of assessment. This paper contributes to that nascent body of research.