
Evidence, knowledge, implementation: glossary of terminology

Published online by Cambridge University Press: 19 January 2017

Editorials in This Issue

Copyright © Cambridge University Press 2017

In this issue of Epidemiology and Psychiatric Sciences, two Editorials discuss the theoretical and practical issues related to the process of guideline development and implementation in mental health. Two different perspectives are critically analysed. Salvador-Carulla et al. (2017) consider guidelines as implementation tools, and therefore better implementation is seen as a consequence of better guideline development. Becker et al. (2017) consider guidelines as tools to be implemented, and therefore better implementation is seen as a consequence of better knowledge about the efficacy of different guideline implementation strategies.

In this glossary, the key terms evidence, knowledge and implementation are briefly described with respect to their role in the process of guideline production and implementation, with the aim of providing reference information to better contextualise the two Editorials.

EVIDENCE = Evidence may be anything presented in support of an assertion. Scientific evidence consists of experimental results that serve to support, refute or modify a scientific hypothesis or theory, when collected and interpreted in accordance with the scientific method. The model of evidence-based medicine (EBM) suggests that scientific evidence should be taken into careful consideration when individual- or population-level recommendations (guidelines) or decisions are formulated (Sackett et al. 1996). Salvador-Carulla et al. (2017) suggest that in the EBM model randomised controlled trials (RCTs) are considered the highest-quality evidence, whilst observational evidence is devalued. A more faithful interpretation of the EBM model, however, is that the type of scientific evidence placed at the pinnacle of the evidence hierarchy depends on the question being addressed. For example, it is widely acknowledged that systematic reviews of RCTs represent the most reliable and appropriate reference standard for summarising the efficacy of interventions, whereas for safety outcomes observational studies, and systematic reviews of such studies, are usually considered at the pinnacle of the evidence hierarchy and are used to inform clinical practice and policy decisions (Vandenbroucke, 2008). A key point with reference to the concept of scientific evidence, therefore, is that the question being asked determines the appropriate research architecture and strategy to be used (Sackett & Wennberg, 1997). This is fully consistent with the principles of the EBM model as it was originally conceptualised (Sackett & Wennberg, 1997).

KNOWLEDGE = The limits of the evidence base alone in informing the process of guideline development have been extensively acknowledged (Greenhalgh et al. 2014). The motto ‘evidence does not make decisions, people do’ (Sackett et al. 1996) is a clear methodological lesson indicating that scientific evidence is only one of several domains of knowledge that should inform decisions and recommendations. Knowledge usually refers to a theoretical or practical understanding of a subject. It can be implicit or explicit, more or less formal or systematic. The process of developing treatment recommendations, therefore, should give consideration to the evidence base, as conceptualised by the EBM model, and to other forms of knowledge, including aspects that may differ according to local context variables, such as values, preferences and feasibility issues. This is so important that one of the most well-developed approaches to producing evidence-based guidelines, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology (Barbui et al. 2015), strongly recommends considering, and transparently reporting using a structured template, the contribution not only of the evidence base but also of all additional and contextual factors considered, including values, preferences and feasibility issues. For example, professionals who develop a guideline may want to check how the evidence base for or against a specific intervention matches a priori values that are considered key reference points. In mental health care, examples of values include the promotion of social inclusion, the prevention of discrimination and stigma, and the prevention of the medicalisation of social problems. There is extensive knowledge on several of these aspects that clearly cannot be ignored and that may lead to recommendations that are not fully in line with the background evidence. The World Health Organization (WHO), for example, has been criticised for issuing some ‘strong’ recommendations based on ‘weak’ evidence (Alexander et al. 2014), but these recommendations were strong on the basis of additional knowledge that was carefully considered and transparently reported. Feasibility issues are also key pragmatic factors, as there is often high-quality evidence in favour of interventions that are not feasible in specific settings of care. The WHO Evidence Resource Centre provides interesting examples of how the process of guideline production can be simultaneously informed by the evidence base and by other sources of knowledge that are given similar value and weight (World Health Organization, 2015). This aspect is of paramount relevance, especially if guidelines are conceptualised as implementation tools, as suggested by Salvador-Carulla et al. (2017), because it means that the implementability of guidelines may depend on whether and how these contextual factors are taken into account.

IMPLEMENTATION = Implementation is defined as a specified set of activities designed to put a programme into practice. Becker et al. (2017) argue that current knowledge about how guidelines should be implemented is sparse and inconclusive in mental health care. This is indeed the case. A recent Cochrane review, which summarised the available experimental evidence on the efficacy of guideline implementation strategies in improving process and patient outcomes in specialist mental health care, included six randomised trials (Bighelli et al. 2017). Although single studies provided initial evidence that implementation of treatment guidelines may achieve small changes in mental health practice, the review highlighted a gap in knowledge, with scant information available to people with mental health problems, health professionals and policy-makers. This aspect is of paramount relevance, especially if guidelines are conceptualised as tools to be implemented, as suggested by Becker et al. (2017), because it means that we still do not know which guideline implementation strategies are effective and cost-effective in the long term. In other fields of health care, interventions that may promote guideline use include educational activities, social engagement, clinical support systems, incentives, and audit and feedback exercises (Gagliardi, 2012). An interesting aspect is that guideline implementation should be an iterative process: following implementation, guideline use and outcomes should be monitored, and the findings used to inform ongoing efforts, as the ultimate goal of any implementation activity is continuous quality improvement.

Acknowledgements

None.

Financial support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflict of Interest

None.

References

Alexander, PE, Bero, L, Montori, VM, Brito, JP, Stoltzfus, R, Djulbegovic, B, Neumann, I, Rave, S, Guyatt, G (2014). World Health Organization recommendations are often strong based on low confidence in effect estimates. Journal of Clinical Epidemiology 67, 629–634.
Barbui, C, Dua, T, Harper, M, Tablante, EC, Thornicroft, G, Saxena, S (2015). Using GRADE to update WHO recommendations for MNS. Lancet Psychiatry 2, 1054–1056.
Becker, T, Kilian, R, Kösters, M (2017). Policies, guideline implementation and practice change – how can the process be understood? Epidemiology and Psychiatric Sciences, this issue. doi:10.1017/S2045796016000706.
Bighelli, I, Ostuzzi, G, Girlanda, F, Cipriani, A, Becker, T, Koesters, M, Barbui, C (2017). Implementation of treatment guidelines for specialist mental health care. Cochrane Database of Systematic Reviews (in press).
Gagliardi, AR (2012). Translating knowledge to practice: optimizing the use of guidelines. Epidemiology and Psychiatric Sciences 21, 231–236.
Greenhalgh, T, Howick, J, Maskrey, N (2014). Evidence based medicine: a movement in crisis? BMJ 348, g3725.
Sackett, DL, Rosenberg, WM, Gray, JA, Haynes, RB, Richardson, WS (1996). Evidence based medicine: what it is and what it isn't. BMJ 312, 71–72.
Sackett, DL, Wennberg, JE (1997). Choosing the best research design for each question. BMJ 315, 1636.
Salvador-Carulla, L, Lukersmith, S, Sullivan, W (2017). From the EBM pyramid to the Greek temple: a new conceptual approach to Guidelines as implementation tools in mental health. Epidemiology and Psychiatric Sciences, this issue. doi:10.1017/S2045796016000767.
Vandenbroucke, JP (2008). Observational research, randomised trials, and two views of medical science. PLoS Medicine 5, e67.
World Health Organization (2015). mhGAP Evidence Resource Centre. http://www.who.int/mental_health/mhgap/evidence/en/