To facilitate moving from research findings to conclusions when conducting systematic reviews (SRs) and health technology assessments (HTAs), evidence grading systems (EGSs) have been developed to assess the quality of bodies of evidence and communicate (un)certainty about the effects of evaluated technologies. Use of EGSs has become an essential step in conducting SRs and HTAs, and those relying on review conclusions should be aware of EGSs’ potential limitations.
This study aims to identify EGSs used in SR and HTA practice, and summarize findings on their inter-rater reliability (IRR). Relevant sources were searched to identify EGSs used in recently published SRs and IRR studies of available EGSs. Members of the International Network of Agencies for Health Technology Assessment were surveyed regarding their current approaches.
Preliminary results indicate that only two conceptually similar EGSs are currently used by several organizations in SR and HTA practice: (i) the Grading of Recommendations Assessment, Development and Evaluation (GRADE) and (ii) the Agency for Healthcare Research and Quality Evidence-based Practice Center Program (AHRQ-EPC). Both approaches emphasize a structured and transparent method. However, results from published IRR studies suggest a risk of variability in their application, owing to researchers’ diverse levels of training and experience in using them and to the complexity and heterogeneity of evidence in SRs.
Validated EGSs can play a critical role in whether and how research findings are eventually translated into practice. However, our results indicate a low level of uptake of EGSs in HTA practice. Both currently used EGSs are susceptible to misapplication, allowing different researchers to grade the same body of evidence differently, and their performance has not been robustly explored in terms of IRR. If these results stand up to replication, the conclusions of published SRs cannot be relied upon, which has implications for the decisions they inform.
Thorough documentation and clear reporting are essential when conducting a comprehensive literature search for a health technology assessment (HTA) or systematic review. The ultimate goal of this process is transparency and reproducibility with the added benefit of increasing the reader's confidence in the research. Thorough documentation of the search also allows for critical appraisal of the methodology used and facilitates future updating of a review (1,2).
It has been found that many systematic review searches are inadequately documented, and that there is little consensus on best practices for reporting standards (3).
As part of the SuRe Info Project, we reviewed all current reporting standards relevant to HTAs and systematic reviews, alongside the published literature on this topic, in order to synthesize the evidence in this area and create a standard set of agreed-upon recommendations.
We conducted a comprehensive search of the Medline, Embase, and LISA (Library and Information Science Abstracts) databases. We also examined the Equator Network website (http://www.equator-network.org/) and consulted the reference lists of included studies and reporting guidelines. Two independent reviewers selected eleven reporting guidelines and eight studies for inclusion in the review. We excluded anything published before 2006, anything that was not a research article (other than the guidelines), and anything that did not provide new recommendations (that is, reviews of another set of recommendations).
After collecting data on the suggested reporting elements described in the literature, we pooled our results to create an overarching list of the most commonly recommended elements to describe and the most commonly recommended methods to use when documenting a comprehensive search. Not only did these elements pertain to documenting the search strategy for the final report, but they also pertained to the protocol and the abstract of a review.
It is hoped that this overview of the literature and compilation of the evidence will resolve some of the confusion surrounding the documentation and reporting of searches, and perhaps help to reduce the prevalence of poorly described strategies in the research literature.