
3347 Developing Relevant Community Engagement Metrics to Evaluate Engagement Support and Outcomes

Published online by Cambridge University Press:  26 March 2019

Grisel M. Robles-Schrader
Affiliation:
Northwestern University
Keith A. Herzog
Affiliation:
Northwestern University
Josefina Serrato
Affiliation:
Northwestern University

Abstract


OBJECTIVES/SPECIFIC AIMS: The goals of this project were two-fold: (1) develop metrics that assess the community engagement support the center provides, and (2) systematically document the fluid and time-intensive nature of providing community-engaged research support, as well as key outcomes.

METHODS/STUDY POPULATION: The Center for Community Health (CCH) utilized REDCap software in combination with Excel to create and implement a data collection system for monitoring and reporting on the full spectrum of engagement activities offered by the center. Center staff collaborated in identifying relevant metrics, developing the data collection instruments, and beta-testing the instruments with real examples. This facilitated the integration of contextual factors (such as the history, size, and diversity of the community; the organizational mission; the structure and size of the CE team; and the number of years a university has been supporting community-engaged research). Taking a collaborative approach to developing the center's evaluation plan offered the added benefits of facilitating staff/faculty buy-in, building staff capacity, and engaging the team in understanding concepts related to performance measurement versus performance management.

RESULTS/ANTICIPATED RESULTS: Key benefits of these engagement tracking systems include consolidating data into a central location, standardizing tracking processes and critical definitions, and supporting more automated reporting systems (e.g., dashboards) that facilitate quality improvement and highlight success stories. Data were compiled and reported via online dashboards (REDCap and Tableau) to help center leadership and staff analyze: (1) quality improvement issues (How quickly are we responding to a request for support? Are we providing resources that meet the needs of community partners? Academics? Community-academic partnerships?); (2) qualitative process questions (In what research phase are we typically receiving requests for support, e.g., proposal development or implementation? What types of projects are applying for seed grants? After a seed grant ends, do the community-academic partnerships continue to collaborate on research activities?); and (3) outcomes (Are new partnerships stemming from our support? Are supported research projects leading to new policies, practices, or programs?).

DISCUSSION/SIGNIFICANCE OF IMPACT: There is a gap in the literature regarding meaningful, actionable, and feasible community engagement metrics that capture critical processes and outcomes. This project identified additional relevant metrics and demonstrates that it is worthwhile to take a collaborative, inclusive approach to identifying, tracking, and reporting on key process and outcome metrics in order to convey a more comprehensive picture of community engagement activities and to inform continuous improvement efforts. Community engagement centers across CTSIs offer a similar range of programs and services. At the same time, much of the community-engaged research literature describes metrics related to community-academic grant submissions, funds awarded, and peer-reviewed publications; experts who provide community engagement support recognize that these metrics are insufficient for understanding the full spectrum of engagement opportunities. Community engagement (CE) teams nationally can utilize these metrics in developing their evaluation infrastructure, and at the national level NCATS can draw on them when developing CE common metrics related to these programs and services.
Critical to this process are: (1) leveraging resources that facilitate collecting generalizable data (national metrics) while allowing sites to continue collecting nuanced data (local programs and services); (2) gathering input from CE teams, stakeholders, and researchers to further refine these metrics and data collection methods; and (3) utilizing REDCap, Tableau, and other resources that can facilitate data collection and analysis efforts.
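To make the tracking-and-dashboard workflow described above concrete, the minimal sketch below exports engagement-request records from a REDCap project through REDCap's standard record-export API and computes one of the quality-improvement measures mentioned in the results (how quickly the center responds to a request for support). The API URL, token, and field names (request_date, first_response_date) are illustrative assumptions, not the CCH's actual instrument design.

# Illustrative sketch: export engagement-request records from a REDCap
# project and compute the median number of days from a support request
# to the first staff response. Field names and the REDCap host are
# hypothetical placeholders.
from datetime import date
from statistics import median

import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # hypothetical host
REDCAP_API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"      # per-project API token


def export_records():
    """Pull all records from the engagement-tracking project as JSON."""
    payload = {
        "token": REDCAP_API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
    }
    response = requests.post(REDCAP_API_URL, data=payload, timeout=30)
    response.raise_for_status()
    return response.json()


def median_response_days(records):
    """Median days between a support request and the first response."""
    lags = []
    for rec in records:
        try:
            requested = date.fromisoformat(rec["request_date"])
            responded = date.fromisoformat(rec["first_response_date"])
        except (KeyError, ValueError):
            continue  # skip records with missing or malformed dates
        lags.append((responded - requested).days)
    return median(lags) if lags else None


if __name__ == "__main__":
    records = export_records()
    print("Median days to first response:", median_response_days(records))

A summary statistic like this can then be written back to a REDCap report or pushed into a Tableau data source so that the dashboard reflects current response times without manual re-entry.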

Type
Health Equity & Community Engagement
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Association for Clinical and Translational Science 2019