
12 - Quality Control

Assessing Reliability and Validity

from Part III - Methodology and Procedures of Interaction Analysis

Published online by Cambridge University Press:  19 July 2018

Elisabeth Brauner
Affiliation: Brooklyn College, City University of New York

Margarete Boos
Affiliation: University of Göttingen

Michaela Kolbe
Affiliation: ETH Zürich
Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018
