
How to conduct implementation trials and multicentre studies in the emergency department

Published online by Cambridge University Press:  30 January 2018

Ian G. Stiell*
Affiliation:
Department of Emergency Medicine, University of Ottawa, Ottawa, ON Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON
Jeffrey J. Perry
Affiliation:
Department of Emergency Medicine, University of Ottawa, Ottawa, ON Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON
Jamie Brehaut
Affiliation:
Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON
Erica Brown
Affiliation:
Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON
Janet A. Curran
Affiliation:
School of Nursing, Dalhousie University, Halifax, NS
Marcel Emond
Affiliation:
Department of Family Medicine and Emergency Medicine, CHU de Quebec-Université Laval, Quebec, QC
Corinne Hohl
Affiliation:
Department of Emergency Medicine, University of British Columbia, Vancouver, BC
Monica Taljaard
Affiliation:
Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON
Andrew D. McRae
Affiliation:
Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON Department of Emergency Medicine, University of Calgary, Calgary, AB.
*
Correspondence to: Dr. Ian Stiell, Clinical Epidemiology Unit, F6, 1053 Carling Avenue, Ottawa, ON K1Y 4E9; Email: istiell@ohri.ca

Abstract

Objective

The objective of Panel 2b was to present an overview of and recommendations for the conduct of implementation trials and multicentre studies in emergency medicine.

Methods

Panel members engaged methodologists to discuss the design and conduct of implementation and multicentre studies. We also conducted semi-structured interviews with 37 Canadian adult and pediatric emergency medicine researchers to elicit barriers and facilitators to conducting these kinds of studies.

Results

Responses were organized by themes, and, based on these responses, recommendations were developed and refined in an iterative fashion by panel members.

Conclusions

We offer eight recommendations to facilitate multicentre clinical and implementation studies, along with guidance for conducting implementation research in the emergency department. Recommendations for multicentre studies reflect the importance of local study investigators and champions, requirements for research infrastructure and staffing, and the cooperation and communication between the coordinating centre and participating sites.

Résumé

Objectif

Le groupe de travail 2b avait pour buts de donner un aperçu de la marche des essais de mise en œuvre et des études multicentriques effectués au service des urgences (SU), et d’élaborer des recommandations en la matière.

Méthodes

Les membres du groupe ont fait appel à des spécialistes de la méthodologie afin d’échanger des idées sur la conception des études sur la mise en œuvre et des études multicentriques ainsi que sur la marche à suivre. Les premiers ont également réalisé des entrevues semi-structurées avec 37 chercheurs canadiens en médecine d’urgence tant adulte que pédiatrique dans le but de faire ressortir les obstacles à la réalisation de ce type d’études ainsi que les facteurs facilitants.

Résultats

L’équipe a groupé les réponses par thème, élaboré des recommandations en tenant compte des réponses reçues, puis amélioré ces recommandations selon un processus itératif.

Conclusions

Le groupe présente huit recommandations visant à faciliter la marche des études cliniques multicentriques ou des études sur la mise en œuvre, et donne des indications sur la manière d’effectuer de la recherche sur la mise en œuvre au SU. Les recommandations sur les études multicentriques tiennent compte de l’importance des chercheurs et des chefs de file locaux, des exigences relatives aux infrastructures et au personnel nécessaires à la recherche ainsi que de la nécessité d’établir une bonne coopération et des communications fréquentes entre les centres de coordination et les centres de recherche participants.

Type: CAEP Academic Symposium Papers
Copyright © Canadian Association of Emergency Physicians 2018

INTRODUCTION

Implementation research refers to the scientific study of methods to promote the uptake of research findings into routine healthcare. This may include an evaluation of the patient and health system impact of translating evidence-based practices into real-world settings, as well as the more traditional knowledge translation (KT) studies that evaluate strategies to promote uptake of clinical research evidence into practice. Front-line clinician engagement is essential to the successful implementation and evaluation of evidence-based clinical practices.

This paper reviews recommendations from Panel 2b for the engagement of clinicians in implementation trials and multicentre studies in Canadian emergency departments (EDs) (Box 1, Box 2). The target audience of these recommendations is both new and experienced clinician scientists seeking to implement new knowledge into practice and to evaluate its effect.

Box 1 Recommendations for implementation trials

  1) Understand how these differ from drug trials.

  2) Understand different study designs, for example, cluster randomized, stepped-wedge, before-after.

  3) Understand sample size and statistical issues, and engage an experienced biostatistician.

  4) Understand ethical challenges.

  5) Understand tips for engaging physicians.

  6) Identify barriers and facilitators prior to starting the trial.

Box 2 Recommendations for conducting multicentre studies

  1) Identify a strong local champion at each site.

  2) Ensure that the site has adequate staffing for the study.

  3) Complete the paperwork: ethics, data-sharing agreement, budget.

  4) Have a startup meeting with investigators and research staff.

  5) Track enrolment and compliance carefully at each site using charts and graphs.

  6) Communicate regularly with each site by conference call, emails, and newsletters.

  7) Have the principal investigator present at grand rounds at each new site to encourage participation.

  8) Use incentives and draws to encourage the compliance of physicians and nurses.

METHODS

The expert panel included four emergency medicine clinician-scientists, a PhD psychologist and a PhD nurse with expertise in KT and clinician behavior, and a PhD biostatistician with expertise in implementation studies. This was the same group that produced recommendations related to engaging clinicians in clinical research studies (Panel 2a).

The panel used a combination of interviews and focus groups (N = 15), as well as email discussions, involving 38 emergency medicine clinician-scientists from across Canada. We sought input on barriers and facilitators with respect to clinician engagement in clinical and implementation research in the ED. The interviews were conducted over a 3-month period, usually in groups of three to five researchers. Responses were grouped into themes, which formed the basis of our recommendations. Recommendations were revised in an iterative fashion by the panel members after discussion during conference calls and by email.

RECOMMENDATIONS

Facilitating implementation trials in the ED

Understand the definition and examples.

We define implementation trials as comparative studies in which the intervention may be complex and require a change in behavior by emergency workers such as physicians, nurses, and paramedics.1 This makes such trials more difficult to conduct than simpler studies (e.g., drug trials) in which there is a single, concrete intervention. Examples of complex trials abound in the emergency medicine literature, such as studies of paramedics implementing a termination-of-resuscitation protocol,2 ED teams implementing pediatric clinical pathways,3 or physicians implementing a decision rule to restrict the use of cervical spine imaging.4

Understand study designs.

Because the risk of contamination is very high in studies that require behavior change, implementation trials typically randomize by cluster (e.g., by hospital) rather than by the individual patient. Cluster randomized trials have a number of other advantages and may be more acceptable administratively than individual patient trials.5,6 The commonly used types of cluster trials include parallel designs (sites are randomly allocated to intervention or control), cross-over designs (sites periodically cross from intervention to control, and vice versa), and stepped-wedge designs (sites move from control to intervention, in random order, at set time intervals).7
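
For readers unfamiliar with the stepped-wedge layout, the following Python sketch illustrates how each site can be assigned a random step at which it crosses from control to intervention; the site names, number of steps, and allocation seed are hypothetical and are not drawn from the trials cited above.

```python
# Minimal sketch (illustrative only): a stepped-wedge allocation schedule in
# which every site starts in the control condition and crosses to the
# intervention at a randomly assigned step.
import random

sites = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]  # hypothetical
n_steps = 6  # measurement periods after the all-control baseline period

random.seed(42)  # fixed seed so the illustrative allocation is reproducible
crossover_steps = random.sample(range(1, n_steps + 1), k=len(sites))

for site, step in zip(sites, crossover_steps):
    # "C" = control period, "I" = intervention period
    schedule = ["C"] * (step - 1) + ["I"] * (n_steps - step + 1)
    print(f"{site}: baseline {' '.join(schedule)}")
```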

Understand sample size and statistical issues.

Estimating sample size and planning analyses for cluster trials are complex tasks that require the assistance of an experienced biostatistician. The sample size calculation and analysis plan must account for the fact that individuals are randomized as a group; as a result, their responses are not statistically independent but are correlated. In addition to the standard requirements of power, alpha level, control proportion, and minimal clinically important difference, the statistical advisor will need to determine the intra-class correlation expected among patients within the same site.8,9 Cluster randomized trials also require an adequate number of sites to be feasible, even if a very large number of observations are available from each site.10,11 The design of a cluster trial might require balancing the need for an adequate number of randomization units against the need to minimize the risk of contamination.
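
As a rough worked example, with purely illustrative numbers that do not come from any of the cited studies, the standard design effect, 1 + (m − 1) × ICC, shows how clustering inflates the sample size calculated for an individually randomized trial and drives the number of sites required:

```python
# Minimal sketch (hypothetical inputs): inflating an individually randomized
# sample size for a cluster design using the standard design effect.
# A biostatistician would refine this for the specific design (e.g., stepped
# wedge) and for unequal cluster sizes.
import math

n_individual = 800   # hypothetical total N from a conventional two-arm calculation
m = 100              # hypothetical average number of patients per site (cluster)
icc = 0.05           # hypothetical intra-class correlation within a site

design_effect = 1 + (m - 1) * icc
n_cluster_trial = math.ceil(n_individual * design_effect)
n_sites = math.ceil(n_cluster_trial / m)

print(f"Design effect: {design_effect:.2f}")          # 5.95
print(f"Total patients needed: {n_cluster_trial}")    # 4760
print(f"Approximate number of sites: {n_sites}")      # 48
```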

Understand ethical considerations.

Implementation studies, particularly those using a cluster-randomized design, raise unique ethical challenges that do not exist in individually randomized or observational studies.12,13 Firstly, the use of a cluster-randomized design must be justified. Cluster-randomized trials are more complex and require larger sample sizes than individually randomized studies, so this design must be necessary for either methodological or logistical reasons.12,13 Secondly, it can be more challenging to identify who is a research subject. Human participants are those who are intervened upon, have their environment manipulated, interact directly with the research team, or contribute identifiable private data.12-15 In an implementation study, this might include patients, but not necessarily; healthcare providers may be considered research subjects if they are the targets of a KT or quality improvement intervention.3,4 Thirdly, it can be challenging to determine when, if at all, informed consent is required from subjects. An implementation study may be eligible for a waiver of consent for some or all subjects if the study interventions pose no more than minimal risk and if the study is not feasible without a waiver. Not all implementation trials are eligible for a waiver, but those with interventions resembling routine or evidence-based care may be.3,4 Finally, the risks and potential benefits of study interventions must be considered with respect to the kind of subject targeted by a particular intervention. For example, the risks and benefits to a healthcare provider must be considered when the provider is the target of a KT intervention, whereas the risks and benefits to patients are considered when patient-level data are used to ascertain trial outcomes.3,4

Engage the physicians.

A multi-pronged approach must be used, as described in the Panel 2a paper by McRae et al.16 Strategies include frequent communication, incentives, use of an enthusiastic local champion, audit and feedback of individual and site performance, targeting of learners, having research staff present in the ED, and building the intervention into existing procedures. Examples of the latter might include computer pop-up instructions for imaging or cardioversion protocols built into the electronic medical record. Compliance with study procedures may be measured as a secondary outcome.

Identify barriers and facilitators.

Identifying local barriers and facilitators before the study launch is very important to the success of an implementation trial. This can be done informally or by using a formal structure such as the Theoretical Domains Framework.17,18 The process starts with qualitative work involving interviews of stakeholders such as physicians, nurses, and managers. The results of the interviews may then be used to create a closed-ended survey of all stakeholders at the site to comprehensively identify barriers and facilitators to implementation of the intervention in question. Theory-informed approaches to the assessment of barriers and drivers of practice behavior change are recommended, as is the systematic development of interventions specifically designed to address the relevant barriers.19
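
As an illustration only, a simple tabulation along the following lines (the domains, items, and scores are hypothetical) could summarize closed-ended survey responses by Theoretical Domains Framework domain and flag the domains an intervention most needs to address:

```python
# Minimal sketch (hypothetical survey data): rank TDF domains by mean barrier
# score so the strongest reported barriers can be targeted by the intervention.
from collections import defaultdict
from statistics import mean

# Each response: (TDF domain, 1-5 agreement with a barrier statement)
responses = [
    ("Knowledge", 2), ("Knowledge", 3),
    ("Environmental context and resources", 5), ("Environmental context and resources", 4),
    ("Social influences", 4), ("Social influences", 3),
]

by_domain = defaultdict(list)
for domain, score in responses:
    by_domain[domain].append(score)

# Highest mean scores suggest priority targets for the implementation strategy
for domain, scores in sorted(by_domain.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{domain}: mean barrier score {mean(scores):.1f} (n={len(scores)})")
```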

Recommendations for conducting multicentre studies

The most compelling clinical evidence comes from research data collected from multiple sites. Although the yield of multicentre studies is great, they are much more difficult and expensive to conduct than single-site studies. Such studies almost always require significant funding.

Identify a strong local champion at each site.

The principal investigator must seek out a very strong local champion to push the process forward in his or her ED. Such individuals must be easy to reach, responsible, and respected enough to influence the behavior of colleagues. The departmental chief may not be the ideal champion if he or she is not wholly committed to the project or is too busy.

Ensure that the site has adequate staffing for the study.

Good sites for multicentre studies should generally have established research infrastructure and experience enrolling patients in the ED. This requires a minimum of one full-time research coordinator or manager, ideally assisted by several research assistants.

Complete the paperwork: ethics, budget, data-sharing agreement.

A surprising amount of paperwork must be completed at each site, including research ethics approval, agreement on a budget and contract, and a data-sharing agreement. This will take time.

Have a startup meeting with investigators and research staff.

Goodwill, communication, and compliance will be greatly enhanced by holding startup meetings that include the principal investigator, coordinating centre staff, site investigators, and site staff. Such meetings can be costly but are well worth the expense. Sometimes these can be tagged onto an existing meeting such as the Canadian Association of Emergency Physicians (CAEP) annual conference. A less costly approach to startup meetings is to have the principal investigator and coordinator visit each site, one by one.

Track enrolment and compliance carefully at each site using charts and graphs.

This is a crucial step and must commence at the outset of the study to ensure that protocols are being followed, that documentation and records are complete and accurate, and that patient enrolment is satisfactory. We encourage the submission of screening and enrolment logs on a regular (e.g., biweekly) basis. Key source documents should be reviewed by the coordinating site staff. As the study progresses, enrolment charts that compare sites to each other are useful tools for providing feedback.
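
A basic plotting sketch such as the following (the weekly counts and site names are invented for illustration) can produce the kind of cumulative enrolment comparison used to give feedback to sites:

```python
# Minimal sketch (hypothetical data): cumulative enrolment by site, plotted
# with matplotlib for the site feedback charts described above.
import matplotlib.pyplot as plt

weeks = list(range(1, 9))
enrolment = {                      # hypothetical weekly enrolment counts per site
    "Site A": [3, 4, 2, 5, 4, 3, 4, 5],
    "Site B": [1, 1, 2, 1, 0, 2, 1, 1],
    "Site C": [2, 3, 3, 2, 4, 3, 2, 3],
}

for site, counts in enrolment.items():
    cumulative = [sum(counts[: i + 1]) for i in range(len(counts))]
    plt.plot(weeks, cumulative, marker="o", label=site)

plt.xlabel("Study week")
plt.ylabel("Cumulative patients enrolled")
plt.title("Enrolment by site")
plt.legend()
plt.show()
```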

Communicate regularly with each site by conference call, emails, and newsletters.

Regular communication with site staff is essential to follow up on issues with enrolment, documentation, and timeliness. Site-specific conference calls are optimal, and these can be supplemented by frequent emails as well as regular newsletters sent to all sites.

The principal investigator should present at grand rounds at each new site to encourage participation.

The launch of the study at a new site should be preceded by a presentation at grand rounds and staff meetings, ideally given by the principal investigator. A variety of communication tools should be used to inform all ED staff about the launch and progress of the study, including signs, emails, and research staff presence in the ED.

Use incentives and draws to encourage the compliance of physicians and nurses.

Busy ED physicians and nurses naturally prefer to avoid the additional burden of completing forms or talking to research staff. We have had very good success motivating staff with simple coffee gift cards or monthly draws for a dinner gift certificate. These small gestures buy goodwill throughout the department.

Acknowledgements: We gratefully acknowledge all of the respondents who contributed their input on barriers and facilitators with respect to engaging healthcare professionals in clinical and implementation research in emergency departments, and their recommendations on the optimal conduct of implementation studies.

Competing interests: None declared.

REFERENCES

1. Stiell IG, Bennett C. Implementation of clinical decision rules in the emergency department. Acad Emerg Med 2007;14:955-959.
2. Morrison LJ, Eby D, Veigas PV, et al. Implementation trial of the basic life support termination of resuscitation rule: reducing the transport of futile out-of-hospital cardiac arrests. Resuscitation 2014;85(4):486-491.
3. Jabbour M, Curran J, Scott SD, et al. Best strategies to implement clinical pathways in an emergency department setting: study protocol for a cluster randomized controlled trial. Implement Sci 2013;8:55.
4. Stiell IG, Clement CM, Grimshaw J, et al. Implementation of the Canadian C-spine rule: prospective 12-centre cluster randomised trial. BMJ 2009;339:b4146.
5. Dainty KN, Scales DC, Brooks SC, et al. A knowledge translation collaborative to improve the use of therapeutic hypothermia in post-cardiac arrest patients: protocol for a stepped wedge randomized trial. Implement Sci 2011;6(4):1-7.
6. Stiell IG, Nichol G, Leroux B, et al. Early versus later rhythm analysis in patients with out-of-hospital cardiac arrest. N Engl J Med 2011;365(9):787-797.
7. Ivers NM, Schwalm JD, Grimshaw JM, et al. Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: review of random sample of 300 trials, 2000-8. BMJ 2011;343:d5886.
8. Donner A, Klar N. Design and analysis of cluster randomization trials in health research. London: Arnold Publishing, Hodder Headline Group; 2000.
9. Hooper R, Bourke L. Cluster randomised trials with repeated cross sections: alternatives to parallel group designs. BMJ 2015;350:h2925.
10. Taljaard M, Teerenstra S, Ivers NM, et al. Substantial risks associated with few clusters in cluster randomized and stepped wedge designs. Clin Trials 2016;13(4):459-463.
11. Hemming K, Eldridge S, Forbes G, et al. How to design efficient cluster randomised trials. BMJ 2017;358:j3064.
12. Weijer C, Grimshaw JM, Eccles MP, et al. The Ottawa Statement on the ethical design and conduct of cluster randomized trials. PLoS Med 2012;9(11):e1001346.
13. Taljaard M, Weijer C, Grimshaw JM, et al; Ottawa Ethics of Cluster Randomized Trials Consensus Group. The Ottawa Statement on the ethical design and conduct of cluster randomised trials: precis for researchers and research ethics committees. BMJ 2013;346:f2838.
14. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: ethical conduct for research involving humans. Ottawa, ON: Government of Canada; 2014.
15. U.S. Department of Health and Human Services. Department of Health and Human Services Rules and Regulations: Protection of Human Subjects. Title 45, Code of Federal Regulations (CFR), Part 46; 1991.
16. McRae AD, Perry JJ, Brehaut J, et al. Engaging emergency clinicians in emergency department clinical research. CJEM 2018; epub, doi: 10.1017/cem.2017.434.
17. Clement CM, Stiell IG, Lowe MA, et al. Facilitators and barriers to application of the Canadian C-spine rule by emergency department triage nurses in large teaching hospitals. Int Emerg Nurs 2016;27:24-30.
18. Curran JA, Brehaut J, Patey AM, et al. Understanding the Canadian adult CT head rule trial: use of the theoretical domains framework for process evaluation. Implement Sci 2013;8:25.
19. French SD, Green SE, O'Connor DA, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci 2012;7:38.