Clinical and translational research relies on a well-trained workforce, but mentorship programs designed expressly for this workforce are lacking. This paper presents the development of a mentoring program for research staff and identifies key programmatic outcomes. Research staff participating in this program were matched with a senior mentor. Focus groups were conducted to identify key program outcomes. Surveys were administered throughout the program period to assess participants’ experience, gains in skill, and subsequent careers. Analysis of the resultant qualitative and quantitative data is used to characterize the implementation and impact of the program. A total of 47 mentees and 30 mentors participated in the program between 2018 and 2023. A comprehensive logic model of short-, intermediate-, and long-term outcomes was developed. Participants reported positive valuations of every programmatic outcome assessed, including their program experience, learning, and research careers. The pool of available mentors also grew as new mentors were successfully recruited for each cohort. This mentorship program, developed and implemented by senior research staff, successfully provided junior research staff with mentorship and professional development opportunities. Junior and senior health research staff built mentoring relationships that advanced their clinical and translational research careers.
Community health workers and promotoras (CHW/Ps) have a fundamental role in facilitating research with communities. However, no national standard training exists as part of the CHW/P job role. We developed and evaluated a culturally- and linguistically tailored online research best practices course for CHW/Ps to meet this gap.
After the research best practices course was developed, we advertised the opportunity to CHW/Ps nationwide to complete the training online in English or Spanish. Following course completion, CHW/Ps received an online survey to rate their skills in community-engaged research and their perceptions of the course using Likert scales of agreement. A qualitative content analysis was conducted on open-ended response data.
A total of 104 CHW/Ps completed the English or Spanish course (n = 52 for each language; mean age 42 years, SD 12); 88% of individuals identified as female and 56% identified as Hispanic, Latino, or Spaniard. Between 96% and 100% of respondents reported improvement in various skills. Nearly all CHW/Ps (97%) agreed the course was relevant to their work, and 96% felt the training was useful. Qualitative themes related to working more effectively as a result of training included enhanced skills, increased resources, and building bridges between communities and researchers.
The CHW/P research best practices course was rated as useful and relevant by CHW/Ps, particularly for communicating about research with community members. This course can be a professional development resource for CHW/Ps and could serve as the foundation for a national standardized training on their role related to research best practices.
In 2017, the Michigan Institute for Clinical and Health Research (MICHR) and community partners in Flint, Michigan, collaborated to launch a research funding program and evaluate the dynamics of those research partnerships receiving funding. While validated assessments for community-engaged research (CEnR) partnerships were available, the study team found none sufficiently relevant to conducting CEnR in the context of this work. MICHR faculty and staff, along with community partners living and working in Flint, used a community-based participatory research (CBPR) approach to develop and administer a locally relevant assessment of CEnR partnerships that were active in Flint in 2019 and 2021.
Surveys were administered each year to over a dozen partnerships funded by MICHR to evaluate how community and academic partners assessed the dynamics and impact of their study teams over time.
The results suggest that partners believed their partnerships were engaging and highly impactful. Although many substantive differences between community and academic partners’ perceptions over time were identified, the most notable concerned the financial management of the partnerships.
This work contributes to the field of translational science by evaluating how the financial management of community-engaged health research partnerships in a locally relevant context of Flint can be associated with these teams’ scientific productivity and impact with national implications for CEnR. This work presents evaluation methods which can be used by clinical and translational research centers that strive to implement and measure their use of CBPR approaches.
OBJECTIVES/GOALS: To improve early career faculty members’ NIH grant writing skills, Clinical and Translational Science Awards (CTSA) hubs have developed a variety of workshop-style programs. However, few articles have evaluated the impact of grant writing workshops on NIH grant submission and award rates. METHODS/STUDY POPULATION: The K Writing program was developed by the Michigan Institute for Clinical and Health Research (MICHR) at the University of Michigan. Since 2012, 435 scholars have participated in the program. The MICHR K Writing program is a three-part workshop series that prepares scholars by providing them with guidelines to write all sections of a career development grant application. Each session focuses on different sections of the K award proposal. During the workshop sessions, participants break into small groups, exchange drafts of their proposal sections, and receive peer critique and feedback from senior faculty facilitators who have experience with NIH study sections. RESULTS/ANTICIPATED RESULTS: Between 2012 and 2018, 273 scholars participated, 57% of whom were female. Our two primary outcomes of interest are submission rates and success rates (the number of grants awarded divided by the number of applications). We plan to examine the effects of several characteristics, including number of sessions attended, cohort year, and faculty vs. postdoctoral status. We will also examine whether there were differences in submission and success rates between female and male researchers and between underrepresented minority scholars and those who identified as white or Asian. Lastly, we will report submission and success rates for each grant mechanism and compare them to the national averages. DISCUSSION/SIGNIFICANCE: Obtaining external research funding is an important part of a faculty career, especially at its early stages.
This research has important implications for the design of similar programs intended to increase submission and success rates for federal grant applications.
OBJECTIVES/GOALS: There are few training programs for health research staff and clinicians like The Practice-Oriented Research Training (PORT) program that include opportunities to conduct funded clinical and translational research. The goal of this study is to evaluate the long-term impact of this program on participants’ professional development and advancement. METHODS/STUDY POPULATION: The Practice-Oriented Research Training program for health research staff and clinicians was operated by the Michigan Institute for Clinical and Health Research from 2008 through 2018. Participants received training and formed teams that received financial support to conduct a clinical or translational study with a faculty mentor. Eleven cohorts comprising 111 individuals participated. The long-term impact of the program was evaluated using sequential mixed methods. All participants were invited to evaluate the program via an online survey in 2021. Respondents were invited to participate in interviews in 2022. Secondary records of the participants’ publications, grants, and professional advancement were collected. RESULTS/ANTICIPATED RESULTS: A total of 68 participants in the PORT program published 345 papers in peer-reviewed scientific journals following the program, averaging over 5 publications per participant. These publications have been cited over 4000 times, with an average of over 13 citations per paper. A large proportion of program participants have continued contributing to health research; the vast majority of program participants chose to continue at the University of Michigan. Survey results indicate participants’ belief that the program had wide-ranging and enduring impacts on key aspects of their careers, including their application of research to practice. Interviews confirmed that the program helped many participants make substantial advancement in their careers.
DISCUSSION/SIGNIFICANCE: Training programs for health research staff and clinicians can have a substantial and enduring impact on their professional development and advancement. The need for programs like PORT will increase as the health research workforce grows. These results inform recommendations for translational scientists.
OBJECTIVES/GOALS: This study evaluates the impact of an updated and expanded training for social and behavioral health researchers. Participants’ experience with training modules focused on community engagement is a focus of this evaluation, as is the application of this training by participants in teams. METHODS/STUDY POPULATION: The Social and Behavioral Research training series for health researchers and team members was first created by faculty and staff of the Michigan Institute for Clinical and Health Research in 2018. This training was updated and expanded in 2021 with support from the National Institutes of Health to include new material regarding community-engaged health research as well as updates concerning technology and new federal regulations. Past participants of the training were invited to retake the training, as were clinical and translational researchers at the University of Michigan who were new to the training. Surveys were sent to all participants after they completed the training, and a focus group of research staff was conducted to identify how they utilized the training in support of their teams. RESULTS/ANTICIPATED RESULTS: It is anticipated that at least 100 individuals will participate in the evaluation of the Social and Behavioral Research training between October 2022 and February 2023. Data extracted from U-M’s learning management platform will demonstrate how well participants performed on key knowledge checks embedded in the modules and how quickly they progressed through the sections of the training. These results will be compared to benchmarks derived from evaluations of the prior course, which were conducted in 2018. A focus group of at least 10 individuals will demonstrate how health research staff utilized the training and associated resources to advance the scientific work of their study teams.
DISCUSSION/SIGNIFICANCE: Comprehensive training programs for research best practices in social and behavioral health need to have tailored and up-to-date information for this group of researchers and staff. The results of this evaluation will demonstrate how this program contributed to the professional development of the health research workforce.
OBJECTIVES/GOALS: There is a critical need to provide quality training for study teams. Materials need to be flexible and available at the learner’s preferred time and format. DIAMOND was developed to provide nimble education offerings that respond to the changing landscape of clinical and translational research. METHODS/STUDY POPULATION: In 2018, four CTSA institutions (the University of Michigan, the Ohio State University, the University of Rochester, and Tufts University) collaborated to launch the DIAMOND portal. Developed as a CTSA-wide web-based platform, DIAMOND allows members of clinical and translational research teams to widely share and access training and education resources. In 2022, a MICHR-led update of DIAMOND used principles of user-centered design to improve the platform. New features include updated search functions to quickly find and sort training materials, tagging of training materials to the characteristics of a translational scientist, and development of user-controlled customized playlists. RESULTS/ANTICIPATED RESULTS: DIAMOND currently includes 217 training resources developed by 30 CTSA hubs and private industry. The platform has over 600 page views per day from users across the U.S. and internationally. DIAMOND includes an easy-to-use form to upload new materials to the platform. Contributors are asked to include key words and select competency domains and characteristics of a translational scientist that apply to their materials. Other new features include tagging materials to streamline and improve search results, the ability to sort materials by competency domain or characteristics of a translational scientist, and the ability for users to create and share customized, personalized playlists. DISCUSSION/SIGNIFICANCE: DIAMOND is an important tool to support workforce development for study teams. Updates to the flexible digital platform meet the needs and preferences of adult learners and busy health professionals.
Lessons learned from the design process and future plans for the platform will be explored.
Community Health Workers and Promotoras (CHW/Ps) are valued for their role in helping to engage community members in research. CHW/Ps have traditionally received variable training in research fundamentals, including the importance and promotion of research rigor to establish consistency in the methods used over time. Research best practices training exists for research professionals, but no standard training is provided as part of the CHW/P job role. To develop this CHW/P research best practices training, our team engaged English- and Spanish-speaking CHW/Ps to watch an early version of an online module and to examine perceptions of the relevance of such a training and optimal delivery methods.
Six virtual focus group discussions were conducted (three in English and three in Spanish) across different US geographic regions with currently employed CHW/Ps.
Forty CHW/Ps participated (95% female, mean age 44 years, 58% identifying as Hispanic/Latino). Four themes emerged: relevance of training, benefits of providing a certificate of completion, flexible training delivery modalities, and peer-led training.
With participation from representatives of the intended learner group of CHW/Ps, our team found that CHW/Ps valued learning about research best practices. They perceived culturally- and linguistically appropriate health research training to be highly relevant to their role, particularly for communicating key information to community members about their participation in health research. Additionally, participants provided input on effective dissemination of the training including the benefit of having proof of course completion, involvement of peer trainers, and value of providing the option to participate in online training.
COVID-19 reinforced the need for effective leadership and administration within Clinical and Translational Science Award (CTSA) program hubs in response to a public health crisis. The speed, scale, and persistent evolution of the pandemic forced CTSA hubs to act quickly and remain nimble. The switch to virtual environments, paired with the need to support program operations while ensuring the safety and well-being of their teams, highlighted the critical support role provided by leadership and administration. The pandemic also illustrated the value of emergency planning in supporting organizations’ ability to quickly pivot and adapt. Lessons learned from the pandemic and from other cases of adaptive capacity and preparedness can aid program hubs in promoting and sustaining the overall capabilities of their organizations to prepare for future events.
Retrospective case studies of initiatives supported by the National Institutes of Health’s Clinical and Translational Science Award (CTSA) hubs can be used to identify facilitators and barriers of translational science. This case study investigates how a CTSA Expanded Access program adapted to changing FDA guidance issued in 2020 to support clinicians’ treatment of COVID-19 patients in Michigan. We studied how this program changed throughout the pandemic to support physicians’ requests for remdesivir, convalescent plasma, and other uses of unapproved drugs and novel medical devices. A protocol for retrospective translational science case studies of health interventions developed by CTSA evaluators was used for this case study. Data collection methods included seven interviews and a review of institutional data, peer-reviewed publications, news stories, and other public records. The barriers identified include evolving guidance, misalignment of organizational operations, and the complexity of the research infrastructure. The facilitators of translation include collaboration between research and care teams, increasing engagement with a broad network of supporters, and ongoing professional development for research staff. The findings of this case study can be used to inform future investigations of the principles underlying the translational process.
OBJECTIVES/GOALS: The objective of this evaluation is to show how the STEP.UP program promoted professional development at Michigan Medicine by providing clinical and translational research staff with an experienced research staff mentor in a structured 9-month program. METHODS/STUDY POPULATION: Participant and mentor data were collected from application forms, online surveys, and interviews with both participating mentors and mentees. Validated assessments of mentoring competencies were administered. Participants were tracked over a period of four years with regular reviews of institutional records. Mentee and mentor data were also collected at the point of application each year, and the application forms were aligned with NIH definitions for underrepresented populations in science in 2020. As part of a process of continuous programmatic improvement, a STEP.UP Advisory Board consisting of senior research staff and past mentors was involved in the identification, operationalization, and evaluation of programmatic outcomes and is involved in the ongoing governance of this mentoring program. RESULTS/ANTICIPATED RESULTS: Four cohorts of mentees and mentors have participated in this program since its inception. Mentees gained the greatest abilities in active listening, establishing a relationship based on trust, considering how personal and professional differences may impact expectations, and working effectively with mentors/mentees whose personal background is different. Mentees reported that the program contributed to their career planning, professional advancement, networking, personal growth, and communication skills. Mentors reported learning about new professional techniques and areas of expertise. As of 2021, 75% of the first cohort had changed their job classification since participating, as had 25% of the second cohort, and 100% of mentees have maintained research careers.
DISCUSSION/SIGNIFICANCE: The creation of this program in 2019 marked the beginning of a novel professional development opportunity at Michigan Medicine. The evaluation results show how STEP.UP contributes to advancing clinical and translational study teams and how it can inform the identification of best practices in clinical and translational workforce development.
OBJECTIVES/GOALS: The objective of this evaluation is to assess the long-term impact of the PORT program on the clinical and translational research careers of the participating research staff. The impact of the program is best demonstrated through measures of the scientific contributions of the participants as well as their professional advancement over time. METHODS/STUDY POPULATION: The PORT program participants were tracked through the collection of institutional and public records, including their subsequent grants and publications. The clinical and translational research careers of the participants were also assessed, using a measure adapted from the operational guidelines for NCATS’ Research Careers Common Metric. A survey was administered to past participants, and interviews were conducted with participants from past cohorts. RESULTS/ANTICIPATED RESULTS: The evaluation results demonstrate that PORT program participants made substantial contributions to the advancement of clinical and translational research, particularly through their publication of hundreds of scientific works. In addition, the evaluation results reveal that the program had short-, intermediate-, and long-term impacts on their research careers, thereby contributing to the advancement of the health research workforce at the University of Michigan for well over a decade. Specific participant cases highlight how individuals utilized their experience and training to advance research agendas and their long-term careers at the institution. These findings can inform the development, implementation, and evaluation of similar programs throughout the CTSA consortium and beyond. DISCUSSION/SIGNIFICANCE: Most evaluations of research training and award programs for clinical and translational research staff do not evaluate the long-term impact of CTSA support on the research careers of the participants.
The findings of this evaluation can help inform the development of new and more effective workforce development initiatives with long-term impact.
OBJECTIVES/GOALS: Researchers include community health workers and promotoras (CHW/Ps) on research teams to increase community engagement; however, no formal training on research best practices exists for this group. Study objectives were to examine the perceived relevance of a new culturally and linguistically appropriate CHW/P training and optimal delivery modes. METHODS/STUDY POPULATION: We conducted six focus groups (FGs), three each in English and Spanish, at three study sites (the University of Florida, the University of Michigan, and the University of California Davis) from February to August 2021. The CHW/Ps were purposively selected to include diverse ages, races/ethnicities, educational levels, and work experiences. Separate FGs were conducted for CHW/Ps in English and Spanish as appropriate. All FGs were audio recorded, translated from Spanish to English, transcribed, and analyzed using the RADaR (Rigorous and Accelerated Data Reduction) technique. RESULTS/ANTICIPATED RESULTS: Forty CHW/Ps (95% women, mean age 45) participated, with the majority (58%) identifying as Hispanic/Latino. Of the sample, most identified as White (50%) or Black (25%). Participants described the proposed training as relevant and said it would help them be confident, comfortable, knowledgeable, and effective in the community. Online training, though advantageous due to its flexibility, also reportedly had barriers such as internet access, computer availability, and the technological know-how of CHW/Ps. A hybrid training approach, online plus peer-led, was recommended due to the importance of personal guidance by an experienced CHW/P, especially for a newly recruited CHW/P. DISCUSSION/SIGNIFICANCE: Findings indicated that a culturally and linguistically appropriate CHW/P training that is flexible and easily accessible in its mode of delivery is relevant and useful. In-person guidance for a new CHW/P was reported as an important training component.
The poster will include detailed quotes on the relevance, usefulness, and mode of delivery of the training.
Monoclonal antibody therapeutics to treat coronavirus disease (COVID-19) have been authorized by the US Food and Drug Administration under Emergency Use Authorization (EUA). Many barriers exist when deploying a novel therapeutic during an ongoing pandemic, and it is critical to assess the needs of incorporating monoclonal antibody infusions into pandemic response activities. We examined the monoclonal antibody infusion site process during the COVID-19 pandemic and conducted a descriptive analysis using data from 3 sites at medical centers in the United States supported by the National Disaster Medical System. Monoclonal antibody implementation success factors included engagement with local medical providers, therapy batch preparation, placing the infusion center in proximity to emergency services, and creating procedures resilient to EUA changes. Infusion process challenges included confirming patient severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) positivity, strained staff, scheduling, and pharmacy coordination. Infusion sites are effective when integrated into pre-existing pandemic response ecosystems and can be implemented with limited staff and physical resources.
Although several initiatives have produced core competency domains for training the translational science workforce, training resources to help clinical research professionals advance these skills reside primarily within local departments or institutions. The Development, Implementation, and AssessMent of Novel Training in Domain (DIAMOND) project was designed to make this training more readily and publicly available. DIAMOND includes a digital portal to catalog publicly available educational resources and an ePortfolio to document professional development. DIAMOND is a nationally crowdsourced, federated, online catalog providing a platform for practitioners to find and share training and assessment materials. Contributors can share their own educational materials using a simple intake form that creates an electronic record; the portal enables users to browse or search this catalog of digital records and access the resources. Since September 2018, the portal has been visited more than 5,700 times and received over 280 contributions from professionals. The portal facilitates opportunities to connect and collaborate regarding future applications of these resources. Consequently, growing the collection and increasing numbers of both contributors and users remains a priority. Results from a small subset of users indicated over half accomplished their purpose for visiting the site, while qualitative results showed that users identified several benefits and helpful features of the ePortfolio.
There is a clear need to educate and train the clinical research workforce to conduct scientifically sound clinical research. Meeting this need requires the creation of tools to assess both an individual’s preparedness to function efficiently in the clinical research enterprise and tools to evaluate the quality and effectiveness of programs that are designed to educate and train clinical research professionals. Here we report the development and validation of a competency self-assessment entitled the Competency Index for Clinical Research Professionals, version II (CICRP-II).
CICRP-II was developed using data collected from clinical research coordinators (CRCs) participating in the “Development, Implementation and Assessment of Novel Training In Domain-Based Competencies” (DIAMOND) project at four clinical and translational science award (CTSA) hubs and partnering institutions.
An exploratory factor analysis (EFA) identified a two-factor structure: the first factor measures self-reported competence to perform routine clinical research functions (e.g., good clinical practice (GCP) regulations), while the second factor measures competence to perform advanced clinical functions (e.g., global regulatory affairs). We demonstrate between-groups validity by comparing CRCs working in different research settings.
The excellent psychometric properties of CICRP-II, its ability to distinguish between experienced CRCs at research-intensive CTSA hubs and CRCs working in less research-intensive community-based sites, and the simplicity of alternative methods for scoring respondents make it a valuable tool for gauging an individual’s perceived preparedness to function in the role of CRC. It is an equally valuable tool for evaluating the quality and effectiveness of clinical research education and training programs.
OBJECTIVES/SPECIFIC AIMS: The purpose of this study was to summarize the existing literature on clinical research competencies and determine what competency assessments currently exist. We also wished to assess which competencies should be included in a research competency assessment tool and to evaluate the validity of current competency assessments. We also examined whether these competency assessments can be used for the purposes of formative and summative evaluation. METHODS/STUDY POPULATION: Prior to conducting our search of the literature, we first compiled a list of search terms (e.g., clinical, research, training, competencies) that could be used to locate articles. We then entered these search terms, in various combinations, on several relevant databases. We evaluated abstracts of the articles revealed by this search to determine whether they met three criteria. The first criterion was that the subjects of the article must be clinical investigators or clinical investigators in training. Relevant disciplines included medicine, public health, nursing, pharmacy, dentistry, and other related fields. The second criterion was that articles should focus on research-based (as opposed to clinical) skills. The last criterion was that research-based competencies (or related terms like skills, abilities, mastery, knowledge) must be assessed in some way. If the abstract suggested that the article met all three criteria, the full article was retrieved and analyzed in depth. To identify articles that eluded the literature search, we then examined the reference sections of these articles and the articles that cited them. When no additional articles could be located, the search for articles stopped. Once a pool of potentially eligible articles was identified, the articles underwent peer review by several researchers experienced with clinical research and competency-based education and assessment.
Articles that were unanimously judged to meet the criteria were included in the systematic review. RESULTS/ANTICIPATED RESULTS: Approximately 75 articles were selected and reviewed for eligibility. After peer review, we found that only a small fraction of these articles met our criteria for inclusion in the systematic literature review. Our preliminary findings suggest that there are few assessments of clinical research competency and that many of these assessments are poorly validated. DISCUSSION/SIGNIFICANCE OF IMPACT: The findings of the present study suggest that the validation methods used thus far are limited and so the validity of many of these assessments is effectively unproven. Future research on assessments of clinical research competency ought to address these limitations by sampling clinical researchers, using more rigorous validation methods, and by confirming hypothesized factor structures in new samples. The use of better-validated instruments may enhance measurement of trainees’ knowledge and skill levels for the purposes of formative and summative assessment.
OBJECTIVES/SPECIFIC AIMS: The DIAMOND project encourages study team workforce development through the creation of a digital learning space that brings together resources from across the CTSA consortium. This allows for widespread access to and dissemination of training and assessment materials. DIAMOND also includes access to an ePortfolio that encourages CRPs to define career goals and document professional skills and training. METHODS/STUDY POPULATION: Four CTSA institutions (the University of Michigan, the Ohio State University, the University of Rochester, and Tufts CTSI) collaborated to develop and implement the DIAMOND portal. The platform is structured around eight competency domains, making it easy for users to search for research training and assessment materials. Contributors can upload links to (and metadata about) training and assessment materials from their institutions, allowing resources to be widely disseminated through the DIAMOND platform. Detailed information about materials included in DIAMOND is collected through an easy-to-use submission form. DIAMOND also includes an ePortfolio designed for CRPs, which encourages workforce development by providing a tool for self-assessment of clinical research skills, allowing users to showcase evidence of experience, training, and education, and fostering professional connections. RESULTS/ANTICIPATED RESULTS: To date, more than 100 items have been posted to DIAMOND by nine contributors. In the first 30 days, there were 229 active users with more than 500 page views from across the U.S. as well as from China and India. Training materials were viewed most often from four competency domains: (1) Scientific Concepts & Research Design, (2) Clinical Study Operations, (3) Ethical & Participant Safety, and (4) Leadership & Professionalism. Additionally, over 100 CRPs have created a DIAMOND ePortfolio account, using the platform to document skills, connect with each other, and search for internships and job opportunities. 
DISCUSSION/SIGNIFICANCE OF IMPACT: Lessons learned during development of the DIAMOND digital platform include defining the relevant information to collect for the best user experience; selecting a standardized, user-friendly digital platform; and integrating the digital network and ePortfolio. Combined, the DIAMOND portal and ePortfolio provide a professional development platform for clinical research professionals to contribute, access, and benefit from training and assessment opportunities relevant to workforce development and their individual career development needs.
Research shows incentives can motivate faculty to increase their engagement in mentoring, despite a myriad of institutional barriers. One such incentive may be the implementation of a university-wide mentor award program to promote a culture of mentorship.
A new mentorship award was created at a research-intensive university and faculty recipients were surveyed to assess their perceptions of the award’s impact on their mentoring practices and career.
Sixty-two percent of awardees (n = 21) completed the survey. Respondents felt the recognition incentivized them to engage in further mentoring and to participate in formal mentorship training. Most awardees referenced the award in their CVs, performance evaluations, and grant proposals. Additionally, they felt the award effectively promoted mentoring among the broader faculty community.
Growth of clinical and translational research depends in part on the mentorship received by early career faculty. Therefore, other research universities may benefit from implementing such awards.
OBJECTIVES/SPECIFIC AIMS: As the sole Clinical and Translational Science Award (CTSA) site in Michigan, the Michigan Institute for Clinical & Health Research (MICHR) at the University of Michigan (UM) is working to develop community-based research networks (CBRNs) that drive clinical and translational research on community-identified health priorities. METHODS/STUDY POPULATION: These CBRNs will be modeled on successful work accomplished in Jackson, MI, where stakeholders from the local healthcare community, County Health Department, Health Improvement Organization, and grassroots community members created a Community of Solution to address the unmet behavioral health and social needs of community members. The CBRNs will focus on identifying community health priorities by gathering input from community members in underserved communities using deliberative software called Choosing All Together (CHAT). RESULTS/ANTICIPATED RESULTS: In the fall of 2017, three focus groups were held in Northern Michigan to identify community health priorities. The top five community health priorities were (1) mental wellness, (2) long-term illness, (3) alcohol and drugs, (4) air, water, and land, and (5) affording care. Additional focus groups are scheduled for the winter in two additional geographic areas. DISCUSSION/SIGNIFICANCE OF IMPACT: Future work on the creation of CBRNs includes building leadership groups, comprising clinicians, community leaders, public health leaders, health system leaders, and researchers, and informing these groups of community-identified health priorities. In addition, the team is working to identify a platform that connects academic investigators across UM with community partners on shared research priorities in real time. To measure and map relationships within the networks, we plan to use Social Network Analysis as an evaluation tool.