
A randomized implementation trial to increase adoption of evidence-informed consent practices

Published online by Cambridge University Press:  14 December 2022

Erin D. Solomon
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Jessica Mozersky
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Melody Goodman
Affiliation:
School of Global Public Health, New York University, New York, NY, USA
Meredith V. Parsons
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Kari A. Baldwin
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Annie B. Friedrich
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Jenine K. Harris
Affiliation:
George Warren Brown School of Social Work, Washington University, St. Louis, MO, USA
James M. DuBois*
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
*Address for correspondence: J. M. DuBois, DSc, PhD, Bioethics Research Center, Division of General Medical Sciences, Department of Medicine, Washington University School of Medicine, 4523 Clayton Avenue, Box 8005, St. Louis, MO 63110, USA. Email: duboisjm@wustl.edu

Abstract

Introduction:

Several evidence-informed consent practices (ECPs) have been shown to improve informed consent in clinical trials but are not routinely used. These include optimizing consent formatting, using plain language, using validated instruments to assess understanding, and involving legally authorized representatives when appropriate. We hypothesized that participants receiving an implementation science toolkit and a social media push would show greater adoption of ECPs and improvement on related secondary outcomes.

Methods:

We conducted a 1-year trial with clinical research professionals in the USA (n = 1284) whose trials were open to older adults or focused on Alzheimer’s disease. We randomized participants to receive information on ECPs either through a toolkit plus a social media push (intervention) or through an online learning module (active control). Participants completed a baseline survey and a follow-up survey after 1 year. A subset of participants was interviewed (n = 43).

Results:

Participants who engaged more with the toolkit were more likely to have tried to implement an ECP during the trial than participants less engaged with the toolkit or the active control group. However, there were no significant differences in the adoption of ECPs, intention to adopt, or positive attitudes. Participants reported the toolkit and social media push were satisfactory, and participating increased their awareness of ECPs. However, they reported lacking the time needed to engage with the toolkit more fully.

Conclusions:

Using an implementation science approach to increase the use of ECPs was only modestly successful. Data suggest that having institutional review boards recommend or require ECPs may be an effective way to increase their use.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

Since the early 20th century, informed consent has been recognized as a cornerstone of ethical clinical research, and it is mentioned in every major international code of ethics [1,2]. Nevertheless, studies consistently find that research participants do not understand key information about studies, that researchers overestimate participants’ understanding of trial information, and that individuals with cognitive impairments are routinely excluded from research participation rather than being enrolled with the permission of a legally authorized representative [3–8]. There are evidence-based practices for informed consent that have been demonstrated to improve participants’ understanding of information and exercise of autonomy, but they are not routinely used [9–13]. In general, familiarity with a practice alone does not lead to its adoption [14]. Individuals need to view the practice as important, know how to implement it, have the resources needed, and overcome barriers to use [15–17].

We conducted an implementation trial with the aim of increasing adoption of four evidence-informed consent practices (ECPs).

Practice 1: Optimizing consent document formatting. This practice involves utilizing bullet points, headings, 12-point font or larger, and generous white space [18,19].

Practice 2: Using plain language in consent documents. We promoted using simple words, short sentences, and an active voice [18,19]. Because institutions frequently require the use of template language in informed consent forms, we focused on the more recent “key information” section of consent documents required by federal regulations, which provides investigators with more discretion [9].

Practice 3: Using a validated instrument to assess participants’ understanding and appreciation of consent information. In this context, understanding is the ability to know the meaning of the information being presented, and appreciation involves believing the information and understanding how it is relevant to oneself [1]. A validated instrument is one that has been evaluated and found to be reliable and to measure what it is intended to measure. We promoted a specific validated assessment that can be administered in 5 min with any clinical trial and provides cutoff scores [5].

Practice 4: Involving legally authorized representatives (LARs) when participants lack or are likely to lose decision-making capacity during a clinical trial. We focused on navigating legal issues, identifying appropriate individuals, and documenting participant wishes [8,20,21].

In randomized trials, the first three practices have been associated with significantly improved understanding and appreciation of consent information [3,5,18,19,22–30], and the fourth practice has been demonstrated as feasible and consistent with participant wishes [8,20,21]. Our previous research showed that clinical research professionals self-reported optimizing formatting in 42% of their key information sections and using plain language in 63% of their key information sections [12]. In the same sample, 44% of participants self-reported using a validated assessment of consent [13].

Implementation science aims to bring evidence-informed research findings into routine practice [31]. Our review of the literature identified no systematic efforts to increase the uptake of ECPs. We used the Consolidated Framework for Implementation Research (CFIR) to guide an implementation trial focused on improving the adoption of ECPs. CFIR is a typology designed to understand and facilitate implementation and consists of five domains: the inner setting, the outer setting, the characteristics of the intervention, the process of implementation, and the characteristics of the individuals involved [15]. Through literature reviews, a quantitative baseline survey, and qualitative interviews with stakeholders, we identified potential barriers and facilitators of adopting ECPs within each of the five CFIR domains. These included (1) local institutional review boards (IRBs), institutional factors, characteristics of the research team, and burden associated with changing consent processes (CFIR inner setting); (2) independent IRBs and study sponsors (CFIR outer setting); (3) educational materials addressing ECPs (CFIR intervention characteristics); (4) means of distributing interventions that promote ECPs (CFIR process); and (5) attitudes, knowledge, and experience of principal investigators (PIs) and clinical research coordinators (CRCs) who lead clinical trial consent processes (CFIR individuals) [12,13,15]. Implementation trials are more successful when they address barriers in these domains, promote adoption of practices by pushing information to users, provide evidence in support of practices, involve credible messengers, and repeat exposure to messages [32–35].

We conducted a 1-year randomized trial to test the effectiveness of an intervention informed by implementation science principles versus standard ethics training for improving the use of ECPs. We initiated the trial in August 2020, the first year of the COVID-19 pandemic. The current standard for research ethics training is text-based online learning modules that include brief quizzes [36]. These modules typically do not utilize effective behavior change methods, do not offer support or tools, and do not address barriers. We hypothesized that an alternative intervention designed using an implementation science approach would increase adoption of ECPs (primary outcome) as well as intention to adopt and positive attitudes toward ECPs (secondary outcomes).

Methods

We utilized a mixed method design. We administered a quantitative survey to participants at baseline and again at the conclusion of the trial and conducted post-trial interviews with a subset of intervention group (IG) participants. The survey was used to test the effect of our intervention on our outcomes and examine engagement with the intervention, while the interviews were used to gather more nuanced information on the effect of being in the intervention. We used the CFIR model to guide the development of the toolkit, social media push, survey instrument, and interview guide. We used an active control group (who received an online learning module focused on ECPs) instead of a true control group because online training via modules is currently the standard approach to ethics training, and we received feedback that all participants would want some information on ECPs given recent updates to the Common Rule for human subjects protection [37]. This research was approved by the Washington University in St. Louis IRB (#201909154).

Trial Procedure

Figure 1 provides an overview of the study procedures. Participants provided their consent and completed a baseline survey via Qualtrics. Although our intervention and outcome assessment targeted individual behavior, we used cluster (institution-level) randomization, stratified by institution size (≥ 10 vs. < 10 people enrolled) and with 2:1 allocation, to assign all participants at the same institution to the intervention (n = 872) or active control (n = 412). Randomizing by institution allowed participants to share our materials with their team while reducing potential contamination between groups. The active control group was provided with a link to the online learning module, which contained the same information on ECPs as the toolkit but presented in text with bullets, tables, and figures, and they were encouraged to complete it. They received a reminder email a few weeks later. The IG participants were sent a link to the toolkit and notified of the upcoming social media push. When participants clicked on the toolkit link, they were asked how they would like to receive communications for the social media push: via listserv emails, by joining one or more of our private social media groups (i.e., Facebook, LinkedIn, or Twitter), or both. IG participants received communications throughout the year. After 1 year, all participants completed the follow-up survey. A subset of IG participants who completed the follow-up survey then completed a post-trial interview. The trial ended after achieving a high response rate on the follow-up survey and after completing all post-trial interviews.
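To make the allocation concrete, the following is a minimal sketch of cluster (institution-level), size-stratified 2:1 randomization in Python. The input file and column names are hypothetical, and this is an illustration of the design described above, not the study’s actual randomization code.

```python
# Hypothetical sketch of institution-level, size-stratified 2:1 randomization.
import random
import pandas as pd

rng = random.Random(2020)
enrolled = pd.read_csv("baseline_enrollment.csv")  # hypothetical: one row per enrolled participant

# Stratify institutions by enrollment size (>= 10 vs. < 10 people enrolled).
sizes = enrolled.groupby("institution").size()
arm_by_institution = {}
for in_large_stratum in (True, False):
    clusters = list(sizes.index[(sizes >= 10) == in_large_stratum])
    rng.shuffle(clusters)
    # 2:1 allocation: roughly two-thirds of the institutions in each stratum get the toolkit.
    cutoff = round(len(clusters) * 2 / 3)
    for i, institution in enumerate(clusters):
        arm_by_institution[institution] = "intervention" if i < cutoff else "active control"

# Everyone at the same institution receives the same assignment.
enrolled["arm"] = enrolled["institution"].map(arm_by_institution)
```

Allocating roughly two-thirds of the institutions in each stratum approximates the 2:1 participant-level split reported above while keeping whole institutions in one arm.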

Fig. 1. Overview of trial procedures.

Materials

Intervention: Toolkit and Social Media Push

A toolkit is an action-oriented collection of information and resources to help individuals adopt practices [38]. The toolkit we created, ConsentTools.org, contains the following resources to help researchers adopt ECPs (see Fig. 2): seven brief videos describing ECPs and practical steps for adopting them; a document illustrating good and unacceptable formatting and readability; a validated assessment of understanding, the UBACC [5], with scoring instructions; documents to support the use of LARs; and template language to use in IRB submissions to justify the use of each practice. The toolkit also contains information about the study and project team, a discussion board, and a frequently asked questions page.

Fig. 2. ConsentTools.org website map. Note. LAR = legally authorized representative. UBACC = University of California, San Diego Brief Assessment of Capacity to Consent.

Of the 282 participants who completed the brief survey of how they wanted to receive communications from our team, 81% indicated that they preferred receiving listserv emails. Therefore, we added all IG participants to our email listserv. We sent listserv emails in “sprints”: we sent 1 or 2 emails per week for several weeks, followed by a break. Most sprints focused on one ECP. All emails were brief and provided links to the toolkit. One sprint provided the main teaching points of the toolkit in the body of the emails. Another offered a certificate of completion to those who watched all videos and completed a quiz. Additionally, we created private groups on Facebook, LinkedIn, and Twitter. These were created because preliminary work conducted prior to the study showed that clinical research organizations had a strong presence on social media. For example, the Association of Clinical Research Professionals (ACRP) had 9,400 followers on LinkedIn, 17,800 on Facebook, and 4,500 on Twitter. The Society of Clinical Research Associates (SOCRA) had 8,100 followers on LinkedIn, 3,600 on Facebook, and 840 on Twitter. During sprints, we posted the same material to these social media groups as was in the listserv emails; however, we created “closed” groups, including only our participants who were randomized to the IG, to prevent contamination.

The toolkit and social media push utilized implementation science principles. We pushed content to participants rather than relying on user pull [33], delivered content in diverse formats (video, documents, emails) [34,35], utilized formatting and plain language principles to maximize comprehension [18,19], and included testimonials from credible messengers to enhance the credibility of the toolkit [35].

Active Control: Online Learning Module

The online learning module was provided to our control group as an active control. It was modeled after standard online research ethics trainings. This module contained the same content on ECPs as the toolkit, presented in text with bullets, tables, and figures, and included a five-question quiz. It did not include any of the videos, tools, or resources to facilitate implementation from the toolkit.

Survey Development

The survey was developed by a team of PhD-level experts in the fields of research ethics and survey design [39–42]. We iteratively revised the survey with input from PIs, CRCs, and IRB members. For some items, we wrote a PI version and a CRC version to reflect their unique roles (e.g., conducting versus supporting a trial). We conducted cognitive interviews (n = 8) to evaluate items for clarity; interviewees were experts in informed consent regulations, obtaining consent, and designing consent protocols. The same items were used in both the baseline and follow-up surveys.

Survey Measures

Adoption

Because we aimed to change the behavior of individuals, we assessed the number of clinical trial protocols that the participant changed to adopt a specific practice (e.g., by adding a validated assessment or changing the consent document formatting). Using the number of trials in which they could have adopted the practice as the denominator, we calculated an adoption percentage.

Attempted implementation for the first time during the trial

We asked participants “Did you attempt to implement any of these practices for the first time in the past year?” with a yes/no response option.

Intention to adopt

For each ECP, participants’ intention to adopt the practices was assessed with two items, “I intend to use this practice in the future” and “Using this practice is a high priority for me” (1 = yes, 0 = no). These items were added together, which resulted in scores that could range from 0 to 2.

Positive attitudes

For each ECP, participants’ attitudes were assessed with two five-point Likert-scale items asking how useful they think the practice is and how interested they are in improving their use of the practice. These items were added together, resulting in scores that could range from 2 to 10.
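As a concrete illustration, the sketch below computes the three derived scores described above (adoption percentage, intention to adopt, and positive attitudes) for a single ECP. The input file and column names are hypothetical stand-ins, not the study’s actual variables.

```python
# Hypothetical sketch of the derived survey scores for one ECP (e.g., plain language).
import pandas as pd

df = pd.read_csv("followup_survey.csv")  # hypothetical file

# Adoption: protocols the participant changed to adopt the practice, divided by
# the protocols in which the practice could have been adopted.
df["adoption_pct"] = 100 * df["protocols_changed"] / df["protocols_eligible"]

# Intention to adopt: two yes/no items (1 = yes, 0 = no) summed into a 0-2 score.
df["intention_to_adopt"] = df["intend_to_use"] + df["high_priority"]

# Positive attitudes: two 5-point Likert items (usefulness, interest) summed into a 2-10 score.
df["positive_attitudes"] = df["usefulness"] + df["interest"]
```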

Impact of COVID-19

For each ECP, we asked “Has the COVID-19 pandemic affected your ability to use this practice?” with response options of “no,” “yes, it made it easier to use this practice,” or “yes, it made it more difficult to use this practice.”

Intervention engagement

We asked IG participants several items about engagement in the trial, e.g., “How did you perceive the number of emails?” and “Why did you not join the social media groups?” Each item provided response options for participants to select from, and for most items participants could select all that applied.

Sharing and effective interventions

We asked participants in both groups “In the future, do you plan to share the [ConsentTools toolkit/online learning module]?” and “In your opinion, how effective would the following approaches be in getting your peers to use these practices?” (1 = not at all effective, 5 = extremely effective) with items including “your IRB strongly recommending these practices,” etc.

Demographics

Demographic questions included sex, age, race, and education. We asked additional questions about their work and the trials they worked on.

Qualitative Interview Approach

We asked about participants’ views on consent best practices, satisfaction with the trial process, levels of engagement during the trial, and ways of promoting ECPs. Four interviewers conducted semi-structured interviews with our interview guide. All interviews were conducted using Zoom, audio recorded, and professionally transcribed into notes.

Participants

We used Power Analysis and Sample Size (PASS 15) software to determine sample size [43]. We based the calculation on a 2 (intervention):1 (control) randomized intent-to-treat analysis and a group sequential test (to account for the longitudinal study design) at the 5% significance level, aiming to detect a difference between groups of 1 Likert category for continuous outcomes or a .10 proportion difference for dichotomous outcomes [44,45]. Achieving > 95% power required 552 participants for continuous outcomes (184 control and 368 intervention) and 773 participants for dichotomous outcomes (258 control and 515 intervention). We decided to oversample in anticipation of attrition.
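For orientation, a rough fixed-sample approximation of the dichotomous-outcome calculation can be run in statsmodels. It assumes illustrative baseline proportions and ignores the group sequential adjustment, so it will not reproduce the PASS figures reported above.

```python
# Rough fixed-sample approximation of a 2:1 comparison of proportions
# (0.10 difference, alpha = .05, 95% power); baseline proportions are illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = abs(proportion_effectsize(0.45, 0.55))  # assumed 0.45 vs. 0.55 adoption
n_control = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.95, ratio=2.0, alternative="two-sided"
)
print(f"~{round(n_control)} control and ~{round(2 * n_control)} intervention participants")
```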

Participants were recruited using two methods. First, we queried the clinicaltrials.gov database for interventional trials focused on Alzheimer’s disease or open to older adults and sent our recruitment email to each study’s contact person. Second, recruitment email blasts were sent to members of the Association of Clinical Research Professionals (ACRP) and our recruitment messages were posted to ACRP social media accounts. Participants were screened to include only those who were CRCs or PIs working in the United States, expected to be involved in at least one clinical intervention trial opening in the next 18 months, and had at least one trial open to older adults (age 65+) or focused on Alzheimer’s disease or cognitive impairments. We focused on these trials because older adults and people with Alzheimer’s disease are at increased risk of cognitive impairments and have historically been excluded from trials [7,8]. Additional details on participant recruitment and the baseline survey findings can be found elsewhere [12,13].

Survey sample

Participants enrolled in the trial by completing the baseline survey (n = 1284). At follow-up 1 year later, 1247 (97.1%) participants were still enrolled and were sent the follow-up survey. Nine emails were undeliverable, 21 withdrew from the study, and 71 were no longer eligible. This left n = 1146 (89.3%) for follow-up and 925 of them completed the survey (an 81% response rate). Fifty-nine responses were removed due to incompleteness, completing the survey in less than 3 min, invalid responses, or completing the payment form multiple times. This left a sample of n = 864 (IG n = 585, active control group n = 279). Participants who completed the follow-up survey were provided a $30 Amazon eGift card. Demographics can be found in Table 1.

Table 1. Characteristics of the implementation trial participants

PI = Principal Investigator. CRC = Clinical Research Coordinator.

Note. a Indicates participants could have selected multiple responses. bMean number of new trials opened in the past year = 5.1 (SD = 6.0). Percentages may not add up to 100% due to rounding.

Interview sample

We used stratified sampling to recruit 7–12 IG participants from each of four groups: PIs who attempted to implement an ECP for the first time during the trial, PIs who did not, CRCs who attempted to implement an ECP for the first time during the trial, and CRCs who did not (total n = 43). All PIs willing to be interviewed were contacted, and a random selection of CRCs willing to be interviewed was contacted until recruitment goals were met. Interview participants were provided a $40 Amazon eGift card.

Data Analysis

First, we examined engagement in the intervention and conducted logistic regression and Pearson’s Chi-squared analyses to determine whether the survey outcomes differed between the intervention and control groups. Because engagement in the intervention was relatively low, we then identified IG participants who engaged in some of the trial activities to create an “intervention engager” group (n = 248). This group was defined as participants who had done any of the following: a) earned our certificate of completion by viewing educational videos and passing a quiz; b) opened at least four of our seven education emails (the subset of listserv emails that contained the teaching points of the toolkit); or c) clicked on links in our emails at least three times over the course of the trial (clicking on links demonstrated interest and took them to the toolkit). We then used multinomial logistic regression and Pearson’s Chi-squared analyses to determine whether survey outcome measures differed at follow-up among three groups: intervention engagers, intervention nonengagers, and active control group participants. We used SPSS version 26 and Stata version 17 to analyze the survey data. There were no missing data because the survey items were forced choice.
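A minimal sketch of the three-group comparison follows, assuming a participant-level DataFrame with hypothetical column names; the study’s actual variable names and the multinomial regression models are not reproduced here.

```python
# Hypothetical sketch: define "intervention engagers" and compare the three groups on
# whether participants attempted to implement an ECP for the first time.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("followup_survey.csv")  # hypothetical file

is_ig = df["arm"] == "intervention"
engaged = (
    df["earned_certificate"]                # (a) earned the certificate of completion
    | (df["education_emails_opened"] >= 4)  # (b) opened >= 4 of the 7 education emails
    | (df["email_link_clicks"] >= 3)        # (c) clicked email links >= 3 times
)
df["group3"] = "active control"
df.loc[is_ig & ~engaged, "group3"] = "IG nonengager"
df.loc[is_ig & engaged, "group3"] = "IG engager"

# Pearson chi-squared test of attempted first-time implementation by group.
table = pd.crosstab(df["group3"], df["attempted_first_time"])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(table, f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}", sep="\n")
```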

We used Dedoose qualitative data analysis software to code the interview transcript notes. We used a combination of deductive and inductive structural coding of responses to our semi-structured interview guide [46]. To ensure the trustworthiness of coding, we double-coded 25% of transcripts and resolved any discrepancies through discussion or clarifications of the codebook.

Results

Engagement in Intervention Activities

Listserv emails were opened by an average of 23% of IG participants per email, and an average of 1% clicked links in the emails to view the toolkit. Seven percent of IG participants joined at least one of the social media groups. After the first “sprint,” we discontinued posting to Twitter due to having very few followers.

Survey Results

The majority of participants reported that the COVID-19 pandemic did not affect their ability to use any of the practices. Specifically, 78% reported it did not affect their ability to use the formatting practice, 84% for plain language, 81% for validated assessments of consent, and 80% for LARs.

For the two-group analysis (IG vs. control group), there were no significant differences on our main outcome variable of adoption for any of the four ECPs, and very few significant differences between the two groups on intention to adopt or positive attitudes. There were no significant differences between the two groups for attempting to implement an ECP for the first time in the past year.

For the three-group analysis, comparing intervention engagers, intervention nonengagers, and control group participants, there were also no significant differences in adoption for any of the four ECPs (see Table 2). Similarly, there were very few significant differences between the three groups on intention to adopt or positive attitudes.

Table 2. Implementation trial results for both primary and secondary outcomes at the one-year follow-up survey

Note 1. Bold indicates statistically significant finding. Adoption rate represents the percentage of trial protocols in which they personally adopted the practice. Intention to adopt was assessed with two items asking whether they intend to use the practice in the future and whether using the practice was a high priority (1 = yes, 0 = no). The two items were added together resulting in intention to adopt scores that could range from 0 to 2. Positive attitudes were assessed with two 5-point Likert-scale items asking how useful they think the practice is and how interested they are in improving their use of the practice (1 = not at all useful/interested, 5 = extremely useful/interested). The two items were added together resulting in positive attitudes scores that could range from 2 to 10. LAR = legally authorized representative.

Note 2. Results for the two-group analysis (intervention vs. control) were nearly identical to the three-group analyses presented in the table. The only differences in the two-group analyses from what is presented in the table were that the control group had significantly higher positive attitudes than the intervention group on plain language (p = 0.03), and there was no significant difference between the intervention and control groups on the percentage who tried to implement one of the practices for the first time in the past year (p = 0.48)

Note 3. We considered whether investigator-initiated trials would be more likely to adopt ECPs, given that they develop their own consent processes (compared to sponsored trials in which informed consent documents and procedures are provided). Having more investigator-initiated trials had small, but significant, associations with adoption rates for each ECP. However, when included in the regression analyses as a control variable, there were no significant differences between groups on adoption of ECPs.

However, IG engagers reported significantly higher rates of attempting to implement an ECP for the first time in the past year. Forty-three percent of IG engagers attempted to implement an ECP for the first time in the past year, compared to 24% of IG nonengagers and 30% of the active control group participants, Pearson χ2(2) = 21.1, p < 0.001.

We included items in the survey to examine why engagement was low (see Table 3). The most common reasons reported for low engagement were that they were too busy to access the toolkit (46%), that they were satisfied with their current consent practices (24%), and that their institution provided the necessary training on consent (17%). Participants reported not joining social media groups because they do not use social media for work purposes (35%) or prefer to receive work-related information via email (34%). The majority of participants (77%) thought we sent the right amount of listserv emails (not too many or too few) and wanted to share the toolkit with colleagues. When asked about approaches that would be effective in getting their peers to use ECPs, participants highly endorsed having their IRB strongly recommend the practices.

Table 3. Follow-up survey results on why the implementation trial engagement was low

Note. Bold indicates statistically significant finding. Participants could select all that applied for the first four questions, excluding those who selected any “not applicable” or “no” responses. For the fifth question, participants rated each response option on a 1–5 Likert scale (1 = not at all effective, 5 = extremely effective). Chi-square values are marked n/a where the assumptions of the test were violated because at least one cell had an expected count less than 5. IRB = Institutional Review Board.

Interview Results

Key interview findings are reported in Table 4.

Table 4. Post-Trial interview questions and results

ECP = evidence-informed consent practice. LAR = legally authorized representative. IRB = institutional review board.

Note.

a Participant responses could have more than one code applied; thus, percentages may be more than 100% when summed.

b Only 7% of participants joined our social media groups, which is why there are so few responses about social media.

Perceived consent best practices

At the start of each interview, we asked participants what they personally considered to be best practices for informed consent. Most participants (91%) reported reviewing the required elements of consent with prospective participants as a consent best practice. Other best practices commonly reported included informally assessing participant understanding (e.g., teach-back, informal questions assessing understanding; 58%), using plain language in consent documents or verbally during the consent discussion (49%), and communicating with participants ahead of the consent visit (40%). Aside from plain language, very few participants reported the three additional ECPs in our trial as consent best practices. In particular, no participants reported using validated assessments of consent understanding as a best practice.

Study effect

Most participants (70%) reported experiencing increased awareness of ECPs because of being in the trial. A considerable minority (35%) reported that being in the trial either increased or improved their use of ECPs. Several participants also reported that being in the trial had no effect, either because they perceived barriers to implementing ECPs (23%), because they were already using ECPs (16%), or because they did not use the toolkit (14%).

Why engagement was low

Most participants indicated that people were too busy or lacked time to engage with the toolkit more fully (63%). Forty percent of participants felt the toolkit was not needed, 26% thought people may have encountered technology issues related to the toolkit or social media push, and 23% reported that people receive too many emails in general.

When asked why they thought other participants seldom opened listserv emails or joined the social media groups, the majority indicated that people do not use social media (or do not use it for work purposes; 79%), or that people get too much email in general and there is too much competition for their time (67%).

Satisfaction with trial process

Most participants (67%) had positive comments about the listserv emails sent during the trial. There were relatively few comments about the social media posts because few participants joined the social media groups (as previously discussed).

Effective training

We explored whether participants felt their existing online ethics training prepared them to implement ECPs. For all ECPs, most participants felt their training did not prepare them to implement the practice; only 14% to 40% felt that their training prepared them. When asked whether they thought a toolkit, like ConsentTools.org, would help their peers implement the practices, the vast majority (84%) reported that it would. This suggests that current ethics trainings do not prepare researchers to implement ECPs, but that researchers believe a toolkit would.

Discussion

The intervention to increase the use of ECPs among clinical research professionals was only modestly successful. As evidence of success, our survey and interview data suggest that participants were satisfied with the toolkit and social media push, with a majority reporting that the toolkit would help clinical researchers adopt the practices. They also reported that being in the study increased their awareness of the practices. The majority had positive comments on the listserv emails and thought the amount sent was just right. Finally, we found that participants who were engaged in trial activities were significantly more likely to attempt to implement at least one of the practices for the first time, which suggests the toolkit may play an important role in encouraging behavior change for those who have never tried implementing a practice before. Taken together, these findings suggest that the toolkit is useful to those who use it.

However, there were no significant differences between groups on adoption of ECPs, intention to adopt, or positive attitudes. The top reason for not engaging more fully with the toolkit was that research professionals are too busy or lack the time needed. Other commonly reported reasons were that the toolkit was not needed or that they were satisfied with their current consent practices. Of note, as observed in the qualitative interview results, participants’ notions of consent best practices often reflected minimum regulatory requirements (e.g., elements of consent) or practices with intuitive appeal (e.g., informally assessing understanding) rather than practices informed by level 1 or 2 evidence from systematic studies, such as those promoted in this trial.

While the toolkit and listserv emails were well received, the social media approach seems to have been problematic. The social media groups were private; members had to request to join, and our posts could not be shared beyond the group. We set up our social media groups in this fashion to prevent contamination across trial groups, but this thwarts key “social” elements of social media. Furthermore, participants reported not using social media for work purposes, not using it at all, or preferring work-related communications to come via email. Thus, using social media and listserv emails to encourage clinical research professionals to adopt evidence-based practices seems to have limited utility, at least when used in a controlled fashion. We believe social media will prove far more useful as we shift to our dissemination phase, where we do not need to limit use to one group.

Implementation of evidence-based practices is associated with organizational climates that are less stressful and more proficient (e.g., workplaces that require individuals to have up-to-date knowledge and be effective in their work) [47]. That participants’ top reason for not engaging with the toolkit was being too busy suggests that the current organizational climate may not be ideal for the voluntary adoption of ECPs. Furthermore, the data suggest that clinical research professionals equate consent best practices with those practices that are required by an IRB (e.g., reviewing risks, voluntariness, etc.). The majority also reported that having their IRB strongly recommend the practices would be the most effective approach to getting their peers to adopt ECPs, which is almost certainly true, as studies cannot move forward without IRB approval. Additionally, engagement with the toolkit was low, even though our participants voluntarily enrolled in a study aimed at increasing the use of ECPs. Encouraging more voluntary training is not likely to be effective. Further, given that there were few differences even between those who engaged in the intervention and those who did not, we are not convinced that training of any sort will lead to behavior changes. We suspect the solution may be for institutions or regulations to mandate the use of ECPs.

The view that regulatory and institutional requirements drive consent practices is consistent with our previously published analyses of our baseline data. Adoption of ECPs was associated with very few characteristics of participants, their institutions, or the kind of research they conduct, and then only weakly and perhaps as a byproduct of our large sample size (n = 1284) [12,13]. This would make sense if the key driver of adoption is regulatory requirements, which transcend differences among institutions and individuals.

When this study was first planned, we explored the feasibility of randomizing IRBs to our interventions; preliminary discussions with leaders in the field indicated that no IRB would agree to be assigned to a group that did not receive full support for the use of ECPs. Absent changes to federal regulations on human subjects’ protections, IRBs may also be reluctant to require or even strongly recommend practices that investigators might perceive as burdensome. To this end, our toolkit cites evidence that the ECPs are generally associated with very positive outcomes for investigators as well as participants. For example, assessing participant understanding using a validated instrument is relatively low burden (requiring only 5 min using the UBACC [5]), while potentially increasing the efficiency of IRB review by reducing concerns about the consent process [48]; having a process for appointing LARs can help recruit and retain participants at risk of cognitive impairments, and most participants support this practice [21,49].

Our implementation trial had limitations. We used criterion-based, convenience sampling, so our sample may not be representative of all clinical research professionals. However, the sample was large, diverse, and adequately powered to detect significant differences in all key outcome variables. We used cluster randomization in a trial with individual-level interventions and outcomes for the most common reason: to avoid contamination [50]. Such designs suffer from a heightened risk of recruitment bias when recruitment occurs after randomization [50]. However, in our case, all participants were recruited prior to randomization, and when they were assigned to a group months after completing the baseline survey, they were not informed about interventions provided to their nonassigned group. We used self-reported data because consent protocols and forms are generally not publicly available. (Although the revised Common Rule requires that consent forms be posted publicly after recruitment closes, it would still be difficult to track our full set of outcomes, especially soon after our intervention [37].) Finally, the majority of the study was conducted during the COVID-19 pandemic. We delayed the push of our toolkit until August 2020, because many clinical trials were frozen in the early days of our study. While our follow-up survey found that most participants opened enough trials during the past year (5.1 on average) to provide adequate opportunity to adopt new practices, and the vast majority reported that the pandemic did not affect their ability to use the practices, it is unknown whether the burdens of the pandemic negatively impacted engagement with the study materials.

Future research might focus on the ways in which a toolkit can support the adoption of practices within the context of required or strongly recommended adoption: can a toolkit improve the quality, ease, and timeliness of adoption? Next steps in our project will involve exploring which aspects of the toolkit most increase clinical research professionals’ confidence that they have the resources needed to adopt ECPs, and systematically disseminating our toolkit to clinical research professionals and IRB members.

Acknowledgements

We thank Isabelle Howerton for her assistance in developing the social media approach. We thank Ruby Varghese for her assistance with participant recruitment. We thank Matthew Wroblewski for his assistance with participant recruitment and sending listserv emails. This research was supported by National Institute on Aging grant 5R01AG058254 (EDS, JM, MG, MVP, KB, ABF, JKH, and JMD) and the National Center for Advancing Translational Sciences grant UL1TR002345 (JM, KB, JMD).

Disclosures

The authors declare no conflicts of interest.

References

Faden, RR, Beauchamp, TL. A History and Theory of Informed Consent. Oxford University Press, 1986.
Emanuel, EJ, Wendler, D, Grady, C. What makes clinical research ethical? The Journal of the American Medical Association 2000; 283(20): 2701–2711.
Montalvo, W, Larson, E. Participant comprehension of research for which they volunteer: a systematic review. Journal of Nursing Scholarship 2014; 46(6): 423–431. DOI: 10.1111/jnu.12097.
Sessums, LL, Zembrzuska, H, Jackson, JL. Does this patient have medical decision-making capacity? The Journal of the American Medical Association 2011; 306(4): 420–427. DOI: 10.1001/jama.2011.1023.
Jeste, DV, Palmer, BW, Appelbaum, PS, et al. A new brief instrument for assessing decisional capacity for clinical research. Archives of General Psychiatry 2007; 64(8): 966–974. DOI: 10.1001/archpsyc.64.8.966.
Mundi, S, Chaudhry, H, Bhandari, M. Systematic review on the inclusion of patients with cognitive impairment in hip fracture trials: a missed opportunity? Canadian Journal of Surgery 2014; 57(4): E141–E145.
Taylor, JS, DeMers, SM, Vig, EK, Borson, S. The disappearing subject: exclusion of people with cognitive impairment and dementia from geriatrics research. Journal of the American Geriatrics Society 2012; 60(3): 413–419. DOI: 10.1111/j.1532-5415.2011.03847.x.
Prusaczyk, B, Cherney, SM, Carpenter, CR, DuBois, JM. Informed consent to research with cognitively impaired adults: transdisciplinary challenges and opportunities. Clinical Gerontologist 2017; 40(1): 63–73. DOI: 10.1080/07317115.2016.1201714.
Mozersky, J, Wroblewski, MP, Solomon, ED, DuBois, JM. How are US institutions implementing the new key information requirement? Journal of Clinical and Translational Science 2020; 4(4): 365–369. DOI: 10.1017/cts.2020.1.
Paasche-Orlow, MK, Brancati, FL, Taylor, HA, Jain, S, Pandit, A, Wolf, M. Readability of consent form templates: a second look. IRB: Ethics & Human Research 2013; 35(4): 12–19.
Paasche-Orlow, MK, Taylor, HA, Brancati, FL. Readability standards for informed-consent forms as compared with actual readability. New England Journal of Medicine 2003; 348: 721–726.
Solomon, ED, Mozersky, J, Wroblewski, MP, et al. Understanding the use of optimal formatting and plain language when presenting key information in clinical trials. Journal of Empirical Research on Human Research Ethics 2022; 17(1–2): 177–192. DOI: 10.1177/15562646211037546.
Solomon, ED, Mozersky, J, Baldwin, K, et al. Perceived barriers to assessing understanding and appreciation of informed consent in clinical trials: a mixed-method study. Journal of Clinical and Translational Science 2021; 5(1): e164. DOI: 10.1017/cts.2021.807.
Brothers, BM, Carpenter, KM, Shelby, RA, et al. Dissemination of an evidence-based treatment for cancer patients: training is the necessary first step. Translational Behavioral Medicine 2014; 5(1): 103–112. DOI: 10.1007/s13142-014-0273-0.
Damschroder, LJ, Aron, DC, Keith, RE, Kirsh, SR, Alexander, JA, Lowery, JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 2009; 4: 50. DOI: 10.1186/1748-5908-4-50.
Ajzen, I. The theory of planned behavior. Organizational Behavior and Human Decision Processes 1991; 50(2): 179–211. DOI: 10.1016/0749-5978(91)90020-T.
Godin, G, Belanger-Gravel, A, Eccles, M, Grimshaw, J. Healthcare professionals’ intentions and behaviours: a systematic review of studies based on social cognitive theories. Implementation Science 2008; 3: 36. DOI: 10.1186/1748-5908-3-36.
The Plain Language Action and Information Network. Federal Plain Language Guidelines. United States Government. 2022. (https://www.plainlanguage.gov/guidelines/)
Plain Language Association International. PLAIN. 2020. (https://plainlanguagenetwork.org/)
Kim, SYH, Kim, HM, McCallum, C, Tariot, PN. What do people at risk for Alzheimer disease think about surrogate consent for research? Neurology 2005; 65(9): 1395–1401. DOI: 10.1212/01.wnl.0000183144.61428.73.
Kim, SY, Karlawish, JH, Kim, HM, Wall, IF, Bozoki, AC, Appelbaum, PS. Preservation of the capacity to appoint a proxy decision maker: implications for dementia research. Archives of General Psychiatry 2011; 68(2): 214–220. DOI: 10.1001/archgenpsychiatry.2010.191.
Agre, P, Campbell, FA, Goldman, BD, et al. Improving informed consent: the medium is not the message. IRB: Ethics & Human Research 2003; 25(5): S11–S19. DOI: 10.2307/3564117.
Flory, J, Emanuel, EJ. Interventions to improve research participants’ understanding in informed consent for research: a systematic review. The Journal of the American Medical Association 2004; 292(13): 1593–1601. DOI: 10.1001/jama.292.13.1593.
Holmes-Rovner, M, Stableford, S, Fagerlin, A, et al. Evidence-based patient choice: a prostate cancer decision aid in plain language. BMC Medical Informatics and Decision Making 2005; 5(1): 16. DOI: 10.1186/1472-6947-5-16.
Jefford, M, Moore, R. Improvement of informed consent and the quality of consent documents. Lancet Oncology 2008; 9(5): 485–493. DOI: 10.1016/S1470-2045(08)70128-1.
Kim, EJ, Kim, SH. Simplification improves understanding of informed consent information in clinical trials regardless of health literacy level. Clinical Trials 2015; 12(3): 232–236. DOI: 10.1177/1740774515571139.
Nishimura, A, Carey, J, Erwin, PJ, Tilburt, JC, Murad, MH, McCormick, JB. Improving understanding in the research informed consent process: a systematic review of 54 interventions tested in randomized control trials. BMC Medical Ethics 2013; 14: 28. DOI: 10.1186/1472-6939-14-28.
Rubright, J, Sankar, P, Casarett, DJ, Gur, R, Xie, SX, Karlawish, JH. A memory and organizational aid improves Alzheimer disease research consent capacity: results of a randomized, controlled trial. American Journal of Geriatric Psychiatry 2010; 18(12): 1124–1132. DOI: 10.1097/JGP.0b013e3181dd1c3b.
Dunn, LB, Nowrangi, MA, Palmer, BW, Jeste, DV, Saks, ER. Assessing decisional capacity for clinical research or treatment: a review of instruments. The American Journal of Psychiatry 2006; 163(8): 1323–1334. DOI: 10.1176/appi.ajp.163.8.1323.
Campbell, FA, Goldman, BD, Boccia, ML, Skinner, M. The effect of format modifications and reading comprehension on recall of informed consent information by low-income parents: a comparison of print, video, and computer-based presentations. Patient Education and Counseling 2004; 53(2): 205–216. DOI: 10.1016/S0738-3991(03)00162-9.
Eccles, MP, Mittman, BS. Welcome to implementation science. Implementation Science 2006; 1(1): 1. DOI: 10.1186/1748-5908-1-1.
Brownson, RC, Colditz, GA, Proctor, EK. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. Oxford: Oxford University Press, 2018.
Haynes, RB, Holland, J, Cotoi, C, et al. McMaster PLUS: a cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. Journal of the American Medical Informatics Association 2006; 13(6): 593–600. DOI: 10.1197/jamia.M2158.
Cervero, RM, Gaines, JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. The Journal of Continuing Education in the Health Professions 2015; 35(2): 131–138. DOI: 10.1002/chp.21290.
Gifford, DR, Holloway, RG, Frankel, MR. Improving adherence to dementia guidelines through education and opinion leaders. Annals of Internal Medicine 1999; 131(4): 237–246. DOI: 10.7326/0003-4819-131-4-199908170-00002.
Hadden, KB, Prince, L, James, L, Holland, J, Trudeau, CR. Readability of human subjects training materials for research. Journal of Empirical Research on Human Research Ethics 2019; 13(1): 95–100. DOI: 10.1177/1556264617742238.
Agency for Healthcare Research and Quality. AHRQ Publishing and Communications Guidelines. (http://www.ahrq.gov/research/publications/pubcomguide/pcguide6.html) (Accessed 3 November 2016)
DuBois, JM, Antes, AL. Five dimensions of research ethics: a stakeholder framework for creating a climate of research integrity. Academic Medicine 2018; 93(4): 550–555. DOI: 10.1097/ACM.0000000000001966.
DuBois, JM, Chibnall, JT, Gibbs, JC. Compliance disengagement in research: development and validation of a new measure. Science and Engineering Ethics 2015; 22(4): 965. DOI: 10.1007/s11948-015-9681-x.
DuBois, JM, Chibnall, JT, Tait, RC, et al. Professional decision-making in research (PDR): the validity of a new measure. Science and Engineering Ethics 2015; 22(2): 391–416. DOI: 10.1007/s11948-015-9667-8.
English, T, Antes, AL, Baldwin, KA, DuBois, JM. Development and preliminary validation of a new measure of values in scientific work. Science and Engineering Ethics 2017; 24(2): 393–418. DOI: 10.1007/s11948-017-9896-0.
NCSS. PASS 15 Power Analysis and Sample Size Software. Kaysville, UT, USA: NCSS, LLC, 2017.
Jennison, C, Turnbull, BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton, FL, USA: Chapman & Hall, 2000.
Zar, JH. Biostatistical Analysis. 2nd ed. Englewood Cliffs, NJ, USA: Prentice-Hall, 1984.
Saldaña, J. The Coding Manual for Qualitative Researchers. 3rd ed. Thousand Oaks, CA, USA: Sage Publications Ltd, 2016.
Aarons, GA, Glisson, C, Green, PD, et al. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: a United States national study. Implementation Science 2012; 7: 56. DOI: 10.1186/1748-5908-7-56.
Stark, L. Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago, IL, USA: University of Chicago Press, 2012.
De Vries, R, Ryan, KA, Stanczyk, AE, et al. Public’s approach to surrogate consent for dementia research: cautious pragmatism. American Journal of Geriatric Psychiatry 2013; 21(4): 364–372.
Easter, C, Thompson, JA, Eldridge, S, Taljaard, M, Hemming, K. Cluster randomized trials of individual-level interventions were at high risk of bias. Journal of Clinical Epidemiology 2021; 138: 49–59. DOI: 10.1016/j.jclinepi.2021.06.021.