
An assessment of inter-rater agreement of the literature filtering process in the development of evidence-based dietary guidelines

Published online by Cambridge University Press: 02 January 2007

Marcia Cooper*
Affiliation:
Division of Gastroenterology and Nutrition, Program in Metabolism, Research Institute, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Nutritional Sciences, University of Toronto, Toronto, Ontario, Canada
Wendy Ungar
Affiliation:
Population Health Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
Stanley Zlotkin
Affiliation:
Division of Gastroenterology and Nutrition, Program in Metabolism, Research Institute, The Hospital for Sick Children, Toronto, Ontario, Canada; Departments of Paediatrics and Nutritional Sciences, University of Toronto, Toronto, Ontario, Canada; Centre for International Health, University of Toronto, Toronto, Ontario, Canada
*Corresponding author: Email marcia_cooper@hc-sc.gc.ca

Abstract

Objective

To determine whether the literature filtering process, a vital initial component of a systematic literature review, could be successfully completed by nutrition professionals or non-professionals.

Design

Using a diet–disease relationship as the guideline topic, inter-rater agreement for the title and abstract filtering processes between and among professionals and non-professionals was assessed and compared with an expert reference standard. Predetermined eligibility criteria were applied by all raters to 185 titles and 90 abstracts. Filtering decisions were initially made independently and then revised after a within-pair consensus meeting.

Subjects

The raters were six dietitians (RD) and six nutrition graduate students (Grad). To assess inter-rater agreement (reliability), each group was divided into three pairs.

Results

Weighted and unweighted kappa statistics and percentage agreement were calculated to determine the inter-rater agreement within pairs. Sensitivity and specificity estimates were determined by comparing responses with those of an expert reference standard. Overall, Grad pairs demonstrated greater inter-rater agreement than RD pairs for title filtering (P < 0.05); no differences were observed for abstract filtering. Compared with the expert reference standard, every rater and pair had false-negative responses for both title and abstract filtering.
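The agreement statistics named above can be sketched in a few lines. The following is a minimal illustration of unweighted Cohen's kappa and percentage agreement for two raters' binary include/exclude filtering decisions; the rater decision lists are hypothetical, and the weighted kappa used in the study is not shown.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which the two raters made the same decision."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa: observed agreement corrected for
    the agreement expected by chance from each rater's marginals."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical title-filtering decisions (1 = include, 0 = exclude)
rater1 = [1, 0, 1, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 0, 1, 1]

print(percent_agreement(rater1, rater2))  # 0.75
print(cohens_kappa(rater1, rater2))       # 0.5
```

Here the raters agree on 6 of 8 titles (0.75), but because half that agreement is expected by chance given each rater's 50/50 include rate, kappa is only 0.5, illustrating why kappa is reported alongside raw percentage agreement.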

Conclusions

After consensus meetings, both RDs and Grads were comparable in their agreement on title and abstract filtering, although important differences remained compared with the expert reference standard. This study provides preliminary findings on the value of utilising a non-expert pair in developing guidelines, and suggests that the literature filtering process is complex and quite subjective.

Type
Research Article
Copyright
Copyright © The Authors 2006
