
Perceived barriers to assessing understanding and appreciation of informed consent in clinical trials: A mixed-method study

Published online by Cambridge University Press:  28 June 2021

Erin D. Solomon
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Jessica Mozersky
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Kari Baldwin
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Matthew P. Wroblewski
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Meredith V. Parsons
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
Melody Goodman
Affiliation:
School of Global Public Health, New York University, New York, NY, USA
James M. DuBois*
Affiliation:
Bioethics Research Center, Washington University School of Medicine, St. Louis, MO, USA
*Address for correspondence: J. M. DuBois, DSc, PhD, Bioethics Research Center, Division of General Medical Sciences, Department of Medicine, Washington University School of Medicine, 4523 Clayton Avenue, Box 8005, St. Louis, MO 63110, USA. Tel: 1-314-747-2710. Email: duboisjm@wustl.edu

Abstract

Introduction:

Participants and research professionals often overestimate how well participants understand and appreciate consent information for clinical trials, and experts often vary in their determinations of a participant’s capacity to consent to research. Past research has developed and validated instruments designed to assess participant understanding and appreciation, but the frequency with which they are used is unknown.

Methods:

We administered a survey to clinical researchers working with older adults or those at risk of cognitive impairment (N = 1284), supplemented by qualitative interviews (N = 60).

Results:

We found that using a validated assessment of consent is relatively uncommon: only 44% of researchers who had an opportunity to adopt one did so. Factors that predicted adoption of validated assessments included not seeing the study sponsor as a barrier, holding positive attitudes toward assessments, and being confident in having the resources needed to implement an assessment. The perceived barriers to adopting validated assessments of consent included lack of awareness, lack of knowledge, being unsure how to administer such an assessment, and the burden associated with implementing the practice.

Conclusions:

Increasing the use of validated assessments of consent will require educating researchers on the practice and emphasizing very practical assessments, and may require Institutional Review Boards (IRBs) or study sponsors to champion the use of assessments.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

The ability to make informed decisions about research participation is fundamental to the ethical premise of respect for autonomy [1]. In line with this premise, federal regulations require that participants have adequate information before deciding on research participation and that a participant’s informed consent be obtained prior to participating [1, 2]. Making a decision to participate in research generally involves four parts: understanding, appreciation, reasoning, and expressing a clear choice [1, 3]. Understanding is the ability to understand or know the meaning of information presented, while appreciation is the ability to recognize how information is relevant to oneself [1, 3]. Reasoning is using the information to weigh options, and expressing a choice is the ability to clearly express a decision [1, 3]. All four components are essential to the consent process and ensure that participants can use consent information to make a decision in line with their own preferences [3, 4].

A challenge to informed consent in research settings is that participants and research professionals often overestimate how well participants understand consent information, and experts often vary in their determinations of a participant’s capacity to consent [5–8]. There are numerous reasons why participants fail to understand and appreciate consent information. They may have a cognitive impairment caused by neurological, psychiatric, or other medical diagnoses [9–15]. Older adults (age 65+), regardless of diagnoses, are at increased risk of cognitive impairment [16, 17]. Furthermore, participants with cognitive impairment may find that their ability to understand consent information changes from one day to the next. It is also possible that the consent information was presented in an unclear manner, perhaps using dense prose and technical or legal language. Language proficiency may pose another barrier [18]. Finally, research participation is often offered following a new diagnosis, when the participant may already feel overwhelmed with information [19].

The National Institutes of Health recently put forth the Inclusion Across the Lifespan policy, which mandates that older adults be included in research unless there is a scientific or ethical reason to exclude them [20]. Older adults have routinely been excluded from clinical research, often because they are at higher risk of cognitive impairments and therefore may face challenges with informed consent [16, 17]. As more older adults are necessarily included in research in the coming years, the odds of enrolling participants with cognitive impairments increase. Having procedures in place to determine participants’ level of consent understanding and appreciation will therefore become essential.

One method for ensuring participant understanding and appreciation is to use a validated informed consent assessment instrument. Some assessments are specific to one domain (e.g., cancer or drug trials) [21, 22] or present a hypothetical study about which the participant must answer questions correctly [23]. Others assess participants’ understanding and appreciation of the actual study they are being asked to join [8, 24]. Assessments may take anywhere from 5 to 20 minutes to administer [8, 23].

Assessing participants’ understanding and appreciation in clinical trials has several benefits. First, it ensures that all participants understand and appreciate the information presented to them during the consent process and removes the need for researchers to rely only on their clinical judgment. Second, it helps to identify participants who may require additional education or clarification regarding aspects of the research. Research is often complex and unfamiliar to participants, and they may need to have parts of the study explained to them more than once in order to fully understand and appreciate it [25, 26]. Third, it can identify weaknesses in the consent process. For example, if participants tend to misunderstand the same piece of information, perhaps the consent process needs modification. Fourth, assessing all participants avoids unfairly targeting or stigmatizing those with certain diagnoses or other characteristics (such as dementia or schizophrenia) by presuming their ability to understand the information is limited. Fifth, it may expedite IRB approval by reducing concerns about the informed consent process (the most common concern voiced by IRB members) [27]. Last, it can identify participants who may need a Legally Authorized Representative (LAR) to consent on their behalf. Participants who continue to demonstrate inadequate understanding, even after additional education is provided, may require assistance with decision making. In such cases, the participant may need to provide their assent, and lower scores on an assessment may be acceptable for this purpose.

There is no regulatory standard for when to use an assessment of consent understanding and appreciation, and IRB guidance can vary [28]. Furthermore, there are no published data on how frequently researchers assess consent understanding and appreciation, or on when it would be appropriate to do so. For example, trials involving only minimal risk—such as a single blood draw—may not require an assessment due to their relative simplicity. However, as the level of risk of a study increases, so does the need to determine and document that participants have understood the information before deciding to take part [28].

The current research was part of a larger implementation science project focused on increasing the use of evidence-based informed consent practices among researchers in the USA (NIA R01AG058254). The data from this study will inform a trial that ultimately seeks to increase adoption (i.e., implementation) of these practices, one of which is using validated consent assessments. We utilized the Consolidated Framework for Implementation Research (CFIR) to guide the overall project [29, 30]. CFIR is composed of five domains: a) the characteristics of individuals who are targeted to implement the new practice [31, 32], b) their outer setting (e.g., Office for Human Research Protections regulations or study sponsors) [32–35], c) their inner setting (e.g., their local IRB or study protocols) [32, 34, 36], d) the characteristics of the intervention being implemented [32], and e) the process of implementation [32, 37]. To our knowledge, this will be the first implementation science trial conducted within the domain of research ethics and informed consent.

For the current research, we sought to better understand three broad questions pertaining to the use of validated consent assessments.

  1. How widespread is the use of validated consent assessments? We seek to understand the current rates of adoption to establish a baseline rate of use.

  2. What modifiable factors predict adoption of validated consent assessments? We seek to understand the modifiable factors associated with adoption of validated assessments. This will yield important information about what interventions might increase researchers’ adoption of the practices for our upcoming implementation trial. We defined modifiable as any factor that could be targeted for change in our upcoming trial. For example, attitudes are potentially modifiable, but funding sources are not.

  3. What are the perceived barriers to adopting validated consent assessments? Understanding the perceived barriers to adoption will help determine how to improve researchers’ use of validated assessments.

Materials and Methods

We used a mixed-methods approach, collecting both quantitative survey data and qualitative interview data, to gather a more complete picture of how often validated assessments are used and the perceived barriers to using them [38]. This research was approved by the Washington University in St. Louis IRB (#201807033 and #201909154). The study samples consisted of principal investigators (PIs), clinical research coordinators (CRCs), and (in the qualitative interviews only) IRB members. PIs were included because even though they often do not obtain consent, they are ultimately responsible for how their trials are conducted and have the ability and authority to make changes to the consent process. CRCs were included because they obtain consent. IRB members were included in the qualitative interview sample because they are integral to the ethical conduct of research, and all consent procedures must be approved by the IRB. Thus, their views were a particularly valuable addition to the research.

Quantitative Survey

Survey development

PhD-level experts in the fields of research ethics, bioethics, and survey design wrote all items with expert input from PIs, CRCs, and IRB members [39–42]. We modified some items to create a PI version and a CRC version where relevant. The survey focused on multiple evidence-based consent practices; we present here only the data on using validated assessments of consent. Cognitive interviews on the survey items were conducted with individuals with expertise in informed consent regulations, conducting consent procedures, and/or designing consent protocols (N = 8). Following the cognitive interviews, items were revised to improve clarity and to reduce overall survey length and burden.

Measures

The survey instrument explored the use of assessments, perceived barriers to using assessments, attitudes toward assessments, and confidence in having the resources needed to use assessments. We also measured social desirability as a control variable and collected demographic information.

Personal adoption

To measure personal adoption of the practice, we first presented a short description of validated consent assessments to ensure that all participants were aware of what they were:

[One consent] practice is assessing participants’ understanding and appreciation of informed consent information using a validated assessment tool. Such tools often involve scoring participant responses to questions about the consent form to determine whether they understand the study and what they are being asked to do. Validated assessments are ones that have been peer reviewed and published.

We then asked how many clinical trial protocols that included an intervention of greater than minimal risk they had submitted to an IRB in the past year, and how many of those protocols they had personally modified to add a validated assessment. Personal adoption was calculated by dividing the number of protocols to which they had added a validated assessment by the number of greater-than-minimal-risk intervention protocols they had submitted to the IRB in the past year, then multiplying by 100 to express it as a percentage.
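As a concrete illustration, this computation can be sketched as follows (a minimal sketch; the function and variable names are ours, not part of the survey instrument):

```python
def personal_adoption_pct(protocols_submitted: int, protocols_modified: int) -> float:
    """Percent of greater-than-minimal-risk protocols submitted to the IRB in the
    past year that the respondent personally modified to add a validated assessment."""
    if protocols_submitted == 0:
        # No opportunity to adopt; such respondents were excluded from this measure.
        raise ValueError("respondent had no opportunity to adopt")
    return 100 * protocols_modified / protocols_submitted

# Example: 3 of 4 eligible protocols modified -> 75.0% personal adoption.
print(personal_adoption_pct(4, 3))
```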

Reasons for not adopting

Participants who did not report using the practice at all were presented with a list of options as to why they did not use the practice (e.g., “I did not think this practice was important,” “I was unaware of this practice”).

Change already made

Two of the reasons for nonadoption, “My research team, group, or lab already uses this practice” and “The sponsor already required it,” indicated that a participant did not have an opportunity to adopt the practice because adoption had already occurred. If either of these response options was endorsed, the participant was considered to have already adopted the practice, although they had not personally made the change. When combined with the personal adoption rate, this yielded an overall adoption rate.

Barriers

All participants, regardless of whether they had reported using the practice, were asked if anyone might prevent them from implementing it. If they responded “yes,” they were presented with a list of options (i.e., “IRB,” “sponsor,” “participants,” “research team members,” and “other”).

Positive attitudes

We measured attitudes toward using validated assessments of consent with two questions, “How useful do you think this practice is in enhancing research participants’ understanding of consent information?” (1 = not at all useful, 5 = extremely useful) and “How interested are you in improving your use of this practice?” (1 = not at all interested, 5 = extremely interested). We summed the responses to these items to produce a positive attitudes score that ranged from 2 to 10.

Confidence in resources

We asked participants “How confident are you that you have the resources you need to use this practice well?” (1 = not at all confident, 5 = extremely confident).

Marlowe–Crowne Social Desirability Scale

The short form of the Marlowe–Crowne Social Desirability Scale was used to measure social desirability [43]. The scale generates scores ranging from 1 to 13, with higher scores indicating a higher desire for social approval [43]. The scale has a reported KR-20 reliability of .88 [43]; in the current study, the KR-20 was .67.
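For reference, KR-20 (Kuder–Richardson Formula 20) is the standard internal-consistency coefficient for dichotomous items; the formula is not given in the original, but for a scale of $k$ items it is

$$\mathrm{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right),$$

where $p_i$ is the proportion of respondents endorsing item $i$ in the keyed direction, $q_i = 1 - p_i$, and $\sigma_X^2$ is the variance of total scores. As with Cronbach’s alpha, higher values indicate greater internal consistency.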

Demographics

We collected data on participants’ gender, age, race, and education, and information about the trials they worked on.

Survey participants

We recruited the quantitative survey participants (N = 1284) using nonprobability, criterion-based sampling. We targeted researchers whose participants have cognitive impairments or whose studies are open to older adults (age 65+), because older adults are at higher risk for cognitive impairment [16, 17]. We targeted researchers working in the USA because regulations on informed consent vary importantly across nations.

We used two methods to recruit survey participants. First, we created a recruitment database by querying the Aggregate Analysis of ClinicalTrials.gov (AACT) database, which houses publicly available information about clinical studies [44]. The database included 20,613 researchers working on interventional clinical trials focused on Alzheimer’s disease (527) or involving participants age 65 or older (20,086). These researchers were then contacted via E-mail. Second, we posted a recruitment message to the Association of Clinical Research Professionals (ACRP) social media groups (i.e., Facebook and LinkedIn) and sent two recruitment E-mails to 9,774 of ACRP’s E-mail list members. Each recruitment method provided a link to our online Qualtrics survey. Participants provided their informed consent prior to completing the survey and received a $20 Amazon eGift Card for participating.

We screened participants to verify that they were a CRC or PI, worked in the United States, and expected to be involved in at least one new clinical intervention trial opening within the next 18 months. CRCs were also asked if they prepared informed consent documents, assisted in preparing consent documents, or obtained informed consent from participants in clinical trials that involve interventions. We removed from the data any participants who screened out (N = 618) or completed the survey multiple times (N = 27). Participants who completed the survey in under five minutes or provided impossible responses to more than one of the consent practice adoption items (i.e., claiming to have added a consent practice to more protocols in the past year than they had submitted to an IRB in the past year) were also removed (N = 67) [45]. Thirty-one cases were retained in the data set as a whole but were excluded from analyses using personal adoption of assessments because they provided an impossible response to that item.
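A minimal sketch of this screening logic (column names are hypothetical; the survey was administered in Qualtrics and the cleaning could be done in any package):

```python
import pandas as pd

df = pd.read_csv("raw_survey_export.csv")  # hypothetical export file

# Flag respondents who finished in under five minutes.
too_fast = df["duration_seconds"] < 300

# Flag impossible responses: claiming to have added a consent practice to more
# protocols than were submitted to an IRB in the past year. One column pair per
# consent practice asked about (names are hypothetical).
practice_pairs = [
    ("assessments_added", "protocols_submitted"),
    ("other_practice_added", "protocols_submitted"),
]
impossible = sum((df[added] > df[submitted]).astype(int)
                 for added, submitted in practice_pairs)

# Remove speeders and respondents with impossible answers on more than one item.
clean = df[~(too_fast | (impossible > 1))]
```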

Qualitative Interviews

Interview guide development

Interview questions were developed using the CFIR model as a framework. Each stakeholder group interviewed (i.e., PIs, CRCs, and IRB members) had its own semi-structured interview guide with adapted open-ended questions; however, each interview followed a similar format. Participants were asked about their current informed consent practices and their attitudes toward evidence-based consent practices. Validated consent assessments were described in the interview guide using language similar to that of the quantitative survey. The interviews were conducted prior to administering the quantitative survey.

Interview participants

We interviewed PIs, CRCs, and IRB members (total N = 60). PIs (N = 20) and CRCs (N = 20) were identified through trial listings on ClinicalTrials.gov. We used purposive sampling to ensure that the sample represented researchers conducting trials with Alzheimer’s disease patients (CRCs 80%, PIs 75%). IRB members (N = 20) were identified through the websites of the 32 US institutions with an NIA Alzheimer’s Disease Research Center (ADRC) [46] and through institutions that were part of the Association of American Medical Colleges (AAMC) [47]. IRB members needed to be voting members of their IRB and to have reviewed at least one clinical trial protocol involving older adults or individuals with cognitive impairments in the past year in order to participate.

Participants were recruited via E-mail, provided informed consent, and completed a demographic survey. Participants then completed a one-hour, semi-structured telephone interview and received a $40 Amazon eGift Card for participating. All interviews were audio recorded and professionally transcribed.

Data Analysis

We used SPSS version 26 and Stata 16 to analyze the quantitative survey data. For Research Question 1 (rates of adoption), we calculated the mean percentage of personal adoption and the total number of participants who had used the practice. We used regression analyses for Research Question 2 (predictors of adoption). We entered the Marlowe–Crowne Social Desirability Scale and the “change already made” variable into block 1 of the regression to account for socially desirable responding and for those who did not have the opportunity to personally modify their protocols and adopt the practice. We identified variables that could be modified by an intervention and entered them into block 2 of the regression: barriers, positive attitudes, and confidence in resources. To keep the regression models parsimonious, and in keeping with our focus on informing implementation efforts, we did not include variables that cannot be influenced, such as funding source. Analysis for Research Question 3 (barriers to adoption) involved tallying the percentages of survey participants indicating various reasons for not adopting the practice and the types of barriers reported.
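A minimal sketch of this two-block (hierarchical) regression using Python’s statsmodels, with hypothetical variable names (the original analyses were run in SPSS 26 and Stata 16):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical analysis file

# Block 1: control variables (social desirability, change already made).
block1 = smf.ols(
    "adoption_pct ~ social_desirability + change_already_made", data=df
).fit()

# Block 2: add the modifiable predictors (perceived barriers, positive
# attitudes, confidence in resources).
block2 = smf.ols(
    "adoption_pct ~ social_desirability + change_already_made"
    " + barrier_irb + barrier_sponsor + positive_attitudes + confidence_resources",
    data=df,
).fit()

# Compare adjusted R-squared across blocks and inspect block-2 coefficients.
print(block1.rsquared_adj, block2.rsquared_adj)
print(block2.summary())
```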

We used Dedoose software to analyze the qualitative interview transcripts. We used a mixture of inductive and deductive coding, using CFIR to guide our codebook development [48]. Each stakeholder group was assigned one gold-standard coder, and coders (KB and ES) were trained on the codebook. Coders were required to attain a Cohen’s kappa score at or above .80 before coding the data. Cohen’s kappa was calculated a second time midway through coding to prevent drift. During the coding period, the coders met weekly to resolve questions, and the team revised the codebook accordingly.
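A minimal sketch of this inter-rater reliability check (the codes and data shown are hypothetical; the study coded transcripts in Dedoose):

```python
from sklearn.metrics import cohen_kappa_score

# Parallel code assignments from the gold-standard coder and a trainee coder
# for the same set of transcript excerpts (hypothetical example data).
gold    = ["burden", "attitudes", "knowledge", "burden", "outer_setting"]
trainee = ["burden", "attitudes", "knowledge", "burden", "inner_setting"]

kappa = cohen_kappa_score(gold, trainee)
print(f"Cohen's kappa = {kappa:.2f}")  # coding began only once kappa >= .80
```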

Results

Table 1 presents demographic results for the survey (N = 1284) and interview samples (N = 60).

Table 1. Demographic characteristics of quantitative survey and qualitative interview samples

Note. Quantitative survey sample N = 1284 (232 PIs and 1052 CRCs). Qualitative interview sample N = 60 (20 PIs, 20 CRCs, and 20 IRB members). a Participants could select more than one response. b N = 936 for this variable only, because only 936 participants had an opportunity to adopt the practice by submitting at least 1 greater-than-minimal-risk protocol to the IRB in the past year and had a valid personal adoption score. PI, principal investigator; CRC, clinical research coordinator; IRB, institutional review board.

Research Question 1: Current Adoption Rates

Table 2 presents means and standard deviations for continuous variables of interest. Most participants (73%; n = 936) had submitted at least one protocol of greater than minimal risk to the IRB in the past year, providing an opportunity to modify a protocol to adopt an assessment. Of those with an opportunity to adopt, 25% (n = 236) had personally modified at least one protocol to include a validated assessment in the past year, while the majority (75%; n = 700) did not (see Fig. 1). Of those nonadopters (n = 700), 25% (n = 173) reported that either someone else on their research team, group, or lab had already made the change or their sponsor already required it. After combining the personal adopters (n = 236) with the participants who reported that the change had already been made (n = 173), 44% of participants had implemented an assessment at least once in the past year. This means that 56% (n = 527) of participants with an opportunity to adopt did not use validated assessments at all in the past year. Those who did modify protocols to include an assessment did so an average of 72% of the time; in other words, they personally modified 72% of the greater-than-minimal-risk protocols they submitted to the IRB in the past year.
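The reported percentages can be reproduced directly from the counts in this paragraph:

```python
opportunity = 936          # submitted >= 1 greater-than-minimal-risk protocol
personal_adopters = 236    # personally modified >= 1 protocol
already_made = 173         # team or sponsor had already adopted

overall = personal_adopters + already_made                 # 409 overall adopters
print(round(100 * overall / opportunity))                  # 44 (%)
print(round(100 * (opportunity - overall) / opportunity))  # 56 (%) non-users
```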

Table 2. Means and standard deviations of quantitative survey sample

Note. N = 1284. *Mean percentage for adoption rate is the average percentage of trials modified to add an assessment, out of the total number of trials of greater than minimal risk submitted to an institutional review board (IRB) in the past year. N = 936 for adoption rate because only 936 participants submitted ≥ 1 greater-than-minimal-risk protocol to the IRB in the past year (indicating an opportunity to adopt) and had a valid personal adoption score (i.e., 0–100%).

Fig. 1. Frequency of Assessment Adoption among Quantitative Survey Sample Participants Who Submitted ≥ 1 Greater-Than-Minimal-Risk Protocol in the Prior Year (N = 936).

Note. Numbers in figure are number of participants falling into each range of the adoption variable, and how many of the non-adopters reported that the change had already been made by either the study sponsor or another member of their research team. Only 73% (N = 936) of the sample is represented here because only 73% had submitted at least 1 greater-than-minimal-risk protocol to the IRB in the past year (which was the denominator for the adoption variable calculation) and had a valid personal adoption score (i.e., 0–100%).

Research Question 2: Predictors of Adoption

As seen in Table 3, the overall regression model in block 2 was significant, adj. R² = .18, p < .001. Results from block 2 show that not seeing the sponsor as a barrier, having more positive attitudes, and having more confidence in resources were all significant predictors of adopting validated assessments. This is a mixture of CFIR outer domain (sponsor) and individual domain (attitudes and confidence) variables.

Table 3. Regression analyses predicting adoption in the quantitative survey sample

Note. N = 936. The dependent variable was adoption of validated assessments of consent in the past year. Bolded variables were significant predictors of adoption. IRB, institutional review board.

Research Question 3: Reasons for Not Adopting and Barriers to Adoption

Quantitative survey

As seen in Table 4, the most common reasons for not adopting a validated assessment were “I was unaware of this practice,” “I’m not sure how to do this,” “The sponsor already required it,” and “My research team, group, or lab already uses this practice.” One hundred and seventeen participants indicated “other.” Responses to our open-ended follow-up question indicated that these participants gauge their participants’ understanding informally without a validated tool, were not responsible for or not allowed to make this type of change to their protocols, or believed their sponsor or IRB might not allow this practice.

Table 4. Reasons assessments were not adopted and perceived barriers to adoption in quantitative survey sample

Note. For reasons for non-adoption, only participants reporting not adopting the practice answered the question (N = 700). For barriers to adoption, only participants reporting that someone might try to prevent them from using the practice answered the question (N = 258). Percentage of question responders is the percentage of participants who selected that response option out of those who answered the question. Percentage of sample is the percentage who selected that response option out of the total number of participants in the quantitative survey sample (N = 1284). *Response option that comprises the change already made variable and does not represent a CFIR domain. Participants could select all response options that applied, and all response options are presented here. CFIR, Consolidated Framework for Implementation Research; IRB, institutional review board.

In response to the item, “Do you think anyone might try to prevent you from using this practice?” 258 participants said “yes.” When asked who would prevent them, participants indicated “research team members” (52%), “sponsor” (44%), “IRB” (40%), and “participants” (20%). This is a mixture of CFIR inner domain (IRB, research team members, participants) and outer domain (sponsors) factors.

Qualitative interviews

Participants identified barriers and facilitators of adoption in the three relevant CFIR domains: individual characteristics, inner setting, and outer setting.

Individual characteristics

As seen in Table 5, the majority of PIs, CRCs, and IRB members reported positive opinions about using validated consent assessments (17 PIs, 18 CRCs, and 13 IRB members); however, only 1 PI and 4 CRCs reported actually using one. Participants also expressed numerous concerns about validated consent assessments (17 PIs, 13 CRCs, and 7 IRB members). Many of these concerns centered on the perception that participants may have negative feelings or reactions to being assessed (10 PIs, 8 CRCs, and 2 IRB members) and on doubts about the value or trustworthiness of validated assessments (10 PIs, 2 CRCs, and 0 IRB members). Other concerns included that using the practice might hurt enrollment, that it would not be needed for their study (often because they were already using an assessment of decisional capacity), or that they would need more training before they would be able to implement the practice. IRB members expressed the fewest concerns.

Table 5. Perceived barriers to adoption of practices indicated by qualitative interview participants

Note. Total N = 60 (20 PIs, 20 CRCs, and 20 IRB members). CFIR outer setting barriers were not included in the table because they were rarely mentioned by qualitative interview participants. CFIR, Consolidated Framework for Implementation Research; CRC, clinical research coordinator; IRB, institutional review board; PI, principal investigator.

We also identified a lack of knowledge as a barrier to adopting the practice (8 PIs, 8 CRCs, and 11 IRB members). As barriers, individuals described their general unawareness of the practice, having relatively little experience with participants with cognitive impairments (and assuming validated consent assessments were only for such participants), and needing training on how to administer an assessment and on what to do when participants provide incorrect responses.

Outer setting barriers

The outer setting was rarely identified as a barrier, arising in only 2 PI, 2 CRC, and 1 IRB member interviews. Where mentioned, outer setting barriers included problems with accessibility for those whose first language is not English and the difficulty of working with LARs in the event a participant fails a consent assessment.

Inner setting barriers

The most frequently cited inner setting barrier was burden (15 PIs, 11 CRCs, and 10 IRB members). Burden primarily consisted of concerns about the time it would take to add an assessment to the already lengthy consent process and the burden that extra time would place on participants. Some also cited the time it would take to select an assessment and to train team members on how to administer and score it. Notably, IRB members also tended to report the research team as a barrier (N = 10), which was not the case for PIs (N = 3) or CRCs (N = 1). IRB members reported that researchers would “push back” if asked to add an assessment to their consent procedures; some because they have been using the same procedures for a long time and are hesitant to change, and others because adding anything to an already long and complex process is undesirable.

Discussion

This study was premised on the idea that using a validated assessment is appropriate in clinical trials that involve greater-than-minimal-risk interventions when enrolling older adults or individuals with cognitive impairments. The reasons for this are manifold: routinely using validated assessments can improve the consent process (e.g., by identifying poorly explained material), identify individuals who may require special consent procedures (e.g., further education or surrogate decision makers), and reduce stigma by making assessment routine for all participants. Additionally, doing this requires little training and time (e.g., 5 min for the UBACC [8]) and may offer additional benefits to researchers, such as expediting IRB approval by reducing concerns about the informed consent process [27]. We found, however, that even in this special subgroup of trials, only a minority of clinical researchers (44%) used validated assessments in the past year, and those who did use them did not use them consistently.

Education Is Needed to Dispel Misconceptions

The most common reasons for not implementing assessments were a lack of awareness of the practice and being unsure how to administer an assessment. This suggests that education and training are needed to raise awareness and to teach researchers how to select and administer assessments. In qualitative interviews, PIs were also concerned about the assessment instruments themselves, doubting the validity of the instruments and whether they actually assess understanding. This suggests that PIs in particular will need to be educated on the validity evidence supporting the use of assessments.

IRBs and Sponsors May Need to Champion the Use of Assessments

Not seeing the sponsor as a barrier predicted adoption of assessments. Also, a common reason for not having adopted assessments in the past year was that sponsors already required the use of an assessment. This suggests that participants perceive sponsors as playing an important and decisive role in consent procedures. Additionally, IRBs were sometimes seen by researchers as barriers to implementing assessments, yet in qualitative interviews IRB members expressed the fewest concerns and were overall highly supportive of assessments. IRBs play an important gatekeeper role in the consent process through their authority to approve or request modifications to consent processes to make them more ethical. If IRBs are perceived as opposing the use of assessments, researchers are less likely to implement them. IRBs may need to champion the use of assessments in order to change researcher perceptions and increase adoption going forward [49]. This is especially important because regulatory standards and guidelines are lacking [28].

Dissemination and Implementation Efforts Will Need to Promote Very Practical Assessments

Qualitative interview participants reported that it would be burdensome to implement an assessment. They reported that adding an assessment would take additional time during an often already long and sometimes overwhelming consent process. Any intervention aimed at increasing the use of assessments may need to focus on very practical and brief assessments, and inform researchers of their brevity. For example, the UBACC can be administered in as few as 5 minutes by Bachelor’s-level research staff with minimal training [8].

Qualitative Methods Are an Important Supplement to a Survey when Studying Barriers

Notably, there were a few discrepancies between our quantitative survey data and qualitative interview data. For example, qualitative interview participants reported that time burden was a barrier to using an assessment, but survey participants did not frequently report not having time to make changes to study protocols. Additionally, qualitative interview participants reported that study participants would be a barrier to implementing assessments, but quantitative survey participants did not frequently report study participants as a barrier. These discrepancies show the need to use both qualitative and quantitative methods when investigating the barriers to implementation. Using both methods provided a fuller picture of the barriers associated with adopting validated assessments.

Further Research Is Needed to Understand Researcher and Participant Experiences with Assessments

There are numerous benefits to routinely assessing participant understanding of consent information. However, there may be times when assessment is inappropriate. For example, researchers in the qualitative interview sample were concerned that using assessments would cause discomfort or frustration among study participants. Does an assessment do more harm than good when a potential participant appears to clearly lack decisional capacity—will it cause foreseeable embarrassment or discomfort? This is an important concern, given that the population most at risk of frustration may be those with undiagnosed cognitive impairments, who may be unable to answer the questions and thus score poorly on the assessment. Screening for this group is one of the main reasons for using an assessment, but further research is needed to provide evidence-based methods of administering assessments in a manner that does not cause frustration.

Furthermore, it is likely that not all greater-than-minimal-risk studies require the use of an assessment. Which study designs, while technically greater than minimal risk, are so simple and safe that assessment would be an unnecessary burden? Further research is needed to identify these types of studies so that assessments can be implemented when needed, and not when they are unnecessary.

Implementation Science Frameworks Are Crucial to Identifying and Overcoming Barriers to Adopting New Practices

Our study was guided by the CFIR implementation science framework. The majority of the barriers identified fall under the CFIR individual domain, suggesting that characteristics of researchers’ knowledge and attitudes are driving the lack of adoption of assessments. Given that attitudes and confidence were significant predictors of adopting assessments and researchers reported a lack of awareness of assessments, addressing researchers’ attitudes and knowledge is essential to increasing adoption. Thus, as already described, providing education and training on assessments will be a necessary component of any intervention aimed at increasing their use. Other barriers, such as the burden associated with implementing assessments and viewing the research team and study participants as obstacles, fall under the CFIR inner domain. These are aspects internal to trials that affect the implementation of assessments. Overcoming these inner domain barriers may require training on available assessments (including those that might be low burden), education to help researchers select an assessment that would be the best fit for their team, and training on how to administer an assessment in a way that minimizes potential distress for study participants.

Limitations

Our survey relied upon self-report data because it would have been impossible to access actual institutional records (protocols, materials, and IRB records). We controlled for socially desirable responding in some of our analyses, but it is possible that participants overreported their use of validated consent assessments. Finally, our sample was largely White and female. There are no available demographic data on CRCs in the USA; however, our sample is consistent with prior studies of CRCs and may be representative of the clinical research professional workforce in the USA [50, 51].

Conclusion

Assessing participants’ understanding and appreciation of consent information is currently infrequent in the USA. Increasing the use of this consent practice is important because both participants and research professionals frequently overestimate how well participants understand research, and experts often vary in their determinations of participants’ capacity to consent [5–8]. Furthermore, trials that are open to older adults (age 65+) or those with dementia or other cognitive impairments are particularly at risk of enrolling participants who do not understand and appreciate consent information [9–17]. Increasing the frequency with which researchers use validated assessments of consent is an important undertaking: it will increase the ethicality of clinical trials conducted in the USA (especially as increasing numbers of older adults are included in clinical research [20]) and may expedite IRB approval [27].

Acknowledgments

The project was supported by National Institute on Aging grant 5R01AG058254 (EDS, JM, KB, MPW, MVP, MG, JMD) and the National Center for Advancing Translational Sciences grant UL1TR002345 (JM, KB, JMD).

Disclosures

The authors have no conflicts of interest to declare.

References

1. Faden RR, Beauchamp TL. A History and Theory of Informed Consent. New York: Oxford University Press, 1986.
2. Department of Health and Human Services. Federal Policy for the Protection of Human Subjects (45 CFR 46). United States Office of the Federal Register, 2018.
3. Grisso T, Appelbaum PS. Assessing Competence to Consent to Treatment: A Guide for Physicians and Other Health Professionals. New York: Oxford University Press, 1998.
4. Appelbaum PS, Roth LH. Competency to consent to research: a psychiatric overview. Archives of General Psychiatry 1982; 39(8): 951–958.
5. Palmer BW, Harmell AL. Assessment of healthcare decision-making capacity. Archives of Clinical Neuropsychology 2016; 31(6): 530–540.
6. Sessums LL, Zembrzuska H, Jackson JL. Does this patient have medical decision-making capacity? JAMA 2011; 306(4): 420–427.
7. Montalvo W, Larson E. Participant comprehension of research for which they volunteer: a systematic review. Journal of Nursing Scholarship 2014; 46(6): 423–431.
8. Jeste DV, Palmer BW, Appelbaum PS, et al. A new brief instrument for assessing decisional capacity for clinical research. Archives of General Psychiatry 2007; 64(8): 966–974.
9. Karlawish JH, et al. The ability of persons with Alzheimer disease (AD) to make a decision about taking an AD treatment. Neurology 2005; 64(9): 1514–1519.
10. Kim SY, Cox C, Caine ED. Impaired decision-making ability in subjects with Alzheimer’s disease and willingness to participate in research. American Journal of Psychiatry 2002; 159(5): 797–802.
11. Depp CA, Moore DJ, Sitzer D, et al. Neurocognitive impairment in middle-aged and older adults with bipolar disorder: comparison to schizophrenia and normal comparison subjects. Journal of Affective Disorders 2007; 101(1–3): 201–209.
12. Stewart RA, Liolitsa D. Type 2 diabetes mellitus, cognitive impairment and dementia. Diabetic Medicine 1998; 16: 93–112.
13. Schmidt R, Fazekas F, Offenbacher H, et al. Magnetic resonance imaging white matter lesions and cognitive impairment in hypertensive individuals. Archives of Neurology 1991; 48: 417–420.
14. Vogels RLC, Scheltens P, Schroeder-Tanka JM, Weinstein HC. Cognitive impairment in heart failure: a systematic review of the literature. European Journal of Heart Failure 2007; 9: 440–449.
15. Jeste DV, Depp CA, Palmer BW. Magnitude of impairment in decisional capacity in people with schizophrenia compared to normal subjects: an overview. Schizophrenia Bulletin 2006; 32(1): 121–128.
16. Plassman BL, Langa KM, Fisher GG, et al. Prevalence of cognitive impairment without dementia in the United States. Annals of Internal Medicine 2008; 148(6): 427–434.
17. Prusaczyk B, Cheney SM, Carpenter CR, DuBois JM. Informed consent to research with cognitively impaired adults: transdisciplinary challenges and opportunities. Clinical Gerontologist 2017; 40(1): 63–73.
18. Barstow C, Shahan B, Roberts M. Evaluating medical decision-making capacity in practice. American Family Physician 2018; 98(1): 40–46.
19. Young AJ, Kim L, Li S, et al. Agency and communication challenges in discussions of informed consent in pediatric cancer research. Qualitative Health Research 2010; 20(5): 628–643.
20. National Institutes of Health. Inclusion across the lifespan [Internet], 2019 [cited July 9, 2020]. (https://grants.nih.gov/policy/inclusion/lifespan.htm)
21. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of informed consent: a new measure of understanding among research subjects. JNCI: Journal of the National Cancer Institute 2001; 93(2): 139–147.
22. Miller CK, O’Donnell DC, Searight HR, Barbarash RA. The Deaconess Informed Consent Comprehension Test: an assessment tool for clinical research subjects. Pharmacotherapy 1996; 16(5): 872–878.
23. Appelbaum PS, Grisso T. MacCAT-CR: MacArthur Competence Assessment Tool for Clinical Research. Sarasota, FL: Professional Resource Press, 2001.
24. Palmer BW, Dunn LB, Appelbaum PS, et al. Assessment of capacity to consent to research among older persons with schizophrenia, Alzheimer disease, or diabetes mellitus: comparison of a 3-item questionnaire with a comprehensive standardized capacity instrument. Archives of General Psychiatry 2005; 62(7): 726–733.
25. Paasche-Orlow MK, Brancati FL, Taylor HA, Jain S, Pandit A, Wolf M. Readability of consent form templates: a second look. IRB 2013; 35(4): 12–19.
26. Kass NE, Chaisson L, Taylor HA, Lohse J. Length and complexity of US and international HIV consent forms from federal HIV network trials. Journal of General Internal Medicine 2011; 26(11): 1324–1328.
27. Stark L. Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: University of Chicago Press, 2012.
28. Dunn LB, Nowrangi MA, Palmer BW, Jeste DV, Saks ER. Assessing decisional capacity for clinical research or treatment: a review of instruments. The American Journal of Psychiatry 2006; 163(8): 1323–1334.
29. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 2009; 4: 50.
30. Eccles MP, Mittman BS. Welcome to Implementation Science. Implementation Science 2006; 1(1): 1.
31. Graham ID, Logan J. Innovations in knowledge transfer and continuity of care. Canadian Journal of Nursing Research 2004; 36(2): 89–103.
32. Pettigrew A, Whipp R. Managing Change and Corporate Performance. European Industrial Restructuring in the 1990s. London: Palgrave Macmillan, 1992.
33. Fixsen D, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. 2005.
34. Stetler CB. Updating the Stetler Model of research utilization to facilitate evidence-based practice. Nursing Outlook 2001; 49(6): 272–279.
35. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health 2008; 35(1–2): 21–37.
36. Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R. Implementing evidence-based interventions in health care: application of the Replicating Effective Programs framework. Implementation Science 2007; 2(1): 42.
37. Kitson A, Ahmed LB, Harvey G, Seers K, Thompson DR. From research to practice: one organizational model for promoting research-based practice. Journal of Advanced Nursing 1996; 23(3): 430–440.
38. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th ed. Thousand Oaks, CA: SAGE Publications, 2014.
39. DuBois JM, Chibnall JT, Tait RC, et al. Professional Decision-Making in Research (PDR): the validity of a new measure. Science and Engineering Ethics 2015; 22(2): 391–416.
40. DuBois JM, Antes AL. Five dimensions of research ethics: a stakeholder framework for creating a climate of research integrity. Academic Medicine 2018; 93(4): 550–555.
41. English T, Antes AL, Baldwin KA, DuBois JM. Development and preliminary validation of a new measure of values in scientific work. Science and Engineering Ethics 2018; 24(2): 393–418.
42. DuBois JM, Chibnall JT, Gibbs JC. Compliance disengagement in research: development and validation of a new measure. Science and Engineering Ethics 2015; 22(4): 965.
43. Crowne DP, Marlowe D. A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology 1960; 24(4): 349.
44. Clinical Trials Transformation Initiative. Improving Public Access to Aggregate Content of ClinicalTrials.gov [Internet], 2016 [cited July 9, 2020]. (https://aact.ctti-clinicaltrials.org/)
45. Leiner DJ. Too fast, too straight, too weird: non-reactive indicators for meaningless data in internet surveys. Survey Research Methods 2019; 13(3): 229–248.
46. National Institute on Aging. Alzheimer’s Disease Research Centers [Internet], 2019 [cited July 9, 2020]. (https://www.nia.nih.gov/health/alzheimers-disease-research-centers)
47. Association of American Medical Colleges. AAMC Medical School Members [Internet], 2020 [cited July 9, 2020]. (https://members.aamc.org/eweb/DynamicPage.aspx?site=AA4MC&webcode=AAMCOrgSearchResult&orgtype=Medical%20School)
48. Saldaña J. The Coding Manual for Qualitative Researchers. 3rd ed. Thousand Oaks, CA: Sage Publications, 2016.
49. Thompson DS, Estabrooks CA, Scott-Findlay S, Moore K, Wallin L. Interventions aimed at increasing research use in nursing: a systematic review. Implementation Science 2007; 2: 15.
50. Mozersky JT, Antes AL, Baldwin K, Jenkerson M, DuBois JM. How do clinical research coordinators learn good clinical practice? A mixed methods study of factors that predict uptake of knowledge. Clinical Trials 2020; 17(2): 166–175.
51. Kolb HR, Kuang H, Behar-Horenstein LS. Clinical research coordinators’ instructional preferences for competency content delivery. Journal of Clinical and Translational Science 2018; 2(4): 217–222.