
Chapter 4 - Trust and Human Factors

Foundations of Trust in Automation

from Part I - Fundamental Level of Trust

Published online by Cambridge University Press: 09 December 2021

Frank Krueger
Affiliation: George Mason University, Virginia

Summary

This chapter discusses the human factors foundations of trust, specifically human-automation trust. Trust in automation can be conceptualized as a three-factor model consisting of the human trustor, the automated trustee, and the environment or context. In this model, qualities of the human (such as experience) work with qualities of the robot (such as form) in an environment that also influences the nature of the interaction. Because trust constantly evolves, time itself is also a facet of trust in human-automation interactions. Measuring trust is challenging because trust is a latent variable that is not directly observable. Measurement is nonetheless necessary to ensure that trust is appropriately calibrated and that there is no mismatch between the trustors’ expectations and the trustees’ capabilities. Trust measures include self-report (survey-type) measures, behavioral observations, and biological measures.
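The three-factor structure and the idea of trust calibration can be made concrete with a toy sketch. The Python code below is purely illustrative: the attribute names, 0-1 scales, and weights are assumptions introduced here, not the chapter's formal model. It simply shows how trustor, trustee, and context attributes might combine into an expectation of the automation, which can then be compared against the trustee's actual capability to flag over- or under-trust.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-factor model: trust arises from the human
# trustor, the automated trustee, and the shared context. All scales (0..1)
# and weights below are illustrative assumptions.

@dataclass
class Trustor:
    experience: float           # prior experience with automation, 0..1
    propensity_to_trust: float  # dispositional tendency to trust, 0..1

@dataclass
class Trustee:
    reliability: float          # observed success rate of the automation, 0..1
    capability: float           # actual task competence, 0..1

@dataclass
class Context:
    task_risk: float            # perceived risk of the task, 0..1

def expected_trust(h: Trustor, a: Trustee, c: Context) -> float:
    """Toy estimate of the trustor's expectation of the trustee (0..1)."""
    disposition = 0.5 * h.propensity_to_trust + 0.5 * h.experience
    return max(0.0, min(1.0, disposition * a.reliability * (1.0 - 0.3 * c.task_risk)))

def calibration_gap(h: Trustor, a: Trustee, c: Context) -> float:
    """Positive values suggest over-trust (expectation exceeds capability);
    negative values suggest under-trust."""
    return expected_trust(h, a, c) - a.capability

if __name__ == "__main__":
    human = Trustor(experience=0.8, propensity_to_trust=0.7)
    robot = Trustee(reliability=0.9, capability=0.6)
    setting = Context(task_risk=0.4)
    print(f"calibration gap: {calibration_gap(human, robot, setting):+.2f}")
```

Because the trustor, trustee, and context are kept as separate entities, the same expectation can be re-estimated over repeated interactions, reflecting the chapter's point that trust (and its calibration) evolves over time.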

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2021

References

Akash, K., Hu, W. L., Jain, N., & Reid, T. (2018). A classification model for sensing human trust in machines using EEG and GSR. ACM Transactions on Interactive Intelligent Systems, 8(4), Article 27. https://doi.org/10.1145/3132743Google Scholar
Alaiad, A., & Zhou, L. (2014). The determinants of home healthcare robots adoption: An empirical investigation. International Journal of Medical Informatics, 83(11), 825840. https://doi.org/10.1016/j.ijmedinf.2014.07.003Google Scholar
Asch, S. E. (1951). Effects of group pressure on the modification and distortion of judgments. In Guetzkow, H. (Ed.), Groups, leadership and men: Research in human relations (pp. 177190). Carnegie Press.Google Scholar
Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193(5), 3135.CrossRefGoogle Scholar
Bellucci, G., Chernyak, S. V., Goodyear, K., Eickhoff, S. B., & Krueger, F. (2017). Neural signatures of trust in reciprocity: A coordinate-based meta-analysis. Human Brain Mapping, 38(3), 12331248. https://doi.org/10.1002/hbm.23451Google Scholar
Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social-history. Games and Economic Behavior, 10(1), 122142. https://doi.org/10.1006/game.1995.1027Google Scholar
Biros, D. P., Daly, M., & Gunsch, G. (2004). The influence of task load and automation trust on deception detection. Group Decision and Negotiation, 13(2), 173189. https://doi.org/10.1023/B:GRUP.0000021840.85686.57Google Scholar
Blau, P. M. (1964). Exchange and power in social life. Wiley.Google Scholar
Block, J. (2001). Millennial contrarianism: The five-factor approach to personality description 5 years later. Journal of Research in Personality, 35(1), 98107. https://doi.org/10.1006/jrpe.2000.2293CrossRefGoogle Scholar
Chen, J. Y. C., & Terrence, P. I. (2009). Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics, 52(8), 907920. https://doi.org/10.1080/00140130802680773Google Scholar
Chien, S. Y., Lewis, M., Sycara, K., Liu, J. S., & Kumru, A. (2016). Relation between trust attitudes toward automation, Hofstede’s cultural dimensions, and big five personality traits. Proceedings of the Human Factors and Ergonomics Society, 840–844. https://doi.org/10.1177/1541931213601192Google Scholar
Costa, A. C. (2003). Work team trust and effectiveness. Personnel Review, 32(5), 605622. https://doi.org/10.1108/00483480310488360Google Scholar
De Visser, E. J., Beatty, P. J., Estepp, J. R., et al. (2018). Learning from the slips of others: Neural correlates of trust in automated agents. Frontiers in Human Neuroscience, 12(309), 115. https://doi.org/10.3389/fnhum.2018.00309Google Scholar
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’13, 251–258. https://doi.org/10.1109/HRI.2013.6483596CrossRefGoogle Scholar
Dijkstra, J. (1995). The influence of an expert system on the user’s view: How to fool a lawyer. New Review of Applied Expert Systems, 1, 123138.Google Scholar
Dijkstra, J. (1999). User agreement with incorrect expert system advice. Behaviour & Information Technology, 18(6), 399411. https://doi.org/10.1080/014492999118832CrossRefGoogle Scholar
Djamasbi, S., Siegel, M., Tullis, T., & Dai, R. (2010). Efficiency, trust, and visual appeal: Usability testing through eye tracking. Proceedings of the Annual Hawaii International Conference on System Sciences, 1–10. https://doi.org/10.1109/HICSS.2010.171CrossRefGoogle Scholar
Drnec, K., Marathe, A. R., Lukos, J. R., & Metcalfe, J. S. (2016). From trust in automation to decision neuroscience: Applying cognitive neuroscience methods to understand and improve interaction decisions involved in human automation interaction. Frontiers in Human Neuroscience, 10(290), 114. https://doi.org/10.3389/fnhum.2016.00290Google Scholar
Earley, P. C. (1988). Computer-generated performance feedback in the magazine-subscription industry. Organizational Behavior and Human Decision Processes, 41(1), 5064. https://doi.org/10.1016/0749-5978(88)90046-5Google Scholar
Ellis, L. U., Sims, V. K., Chin, M. G., et al. (2005). Those a-maze-ing robots: Attributions of ability are based on form, not behavior. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 598601. https://doi.org/10.1177/154193120504900382Google Scholar
Endsley, M. R., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462492. https://doi.org/10.1080/001401399185595CrossRefGoogle Scholar
Erebak, S., & Turgut, T. (2019). Caregivers’ attitudes toward potential robot coworkers in elder care. Cognition, Technology and Work, 21(2), 327336. https://doi.org/10.1007/s10111–018-0512-0CrossRefGoogle Scholar
Evers, V., Maldonado, H. C., Brodecki, T. L., & Hinds, P. J. (2008). Relational vs. group self-construal. 3rd ACM/IEEE International Conference on Human-Robot Interaction, HRI ’08, 255–262. https://doi.org/10.1145/1349822.1349856CrossRefGoogle Scholar
Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behavior to tasks to improve human-robot cooperation. IEEE International Workshop on Robot and Human Interactive Communication, 55–60. https://doi.org/10.1109/ROMAN.2003.1251796Google Scholar
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74(6), 14641480. https://doi.org/10.1037/0022-3514.74.6.1464Google Scholar
Groom, V., Srinivasan, V., Bethel, C. L., Murphy, R., Dole, L., & Nass, C. (2011). Responses to robot social roles and social role framing. International Conference on Collaboration Technologies and Systems, 194–203. https://doi.org/10.1109/CTS.2011.5928687Google Scholar
Hancock, P. A. (1996). Effects of control order, augmented feedback, input device and practice on tracking performance and perceived workload. Ergonomics, 39(9), 11461162. https://doi.org/10.1080/00140139608964535CrossRefGoogle ScholarPubMed
Hancock, P. A. (2013). In search of vigilance: The problem of iatrogenically created psychological phenomena. The American Psychologist, 68(2), 97109. https://doi.org/10.1037/a0030214Google Scholar
Hancock, P. A. (2019). Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics, 62(4), 479495. https://doi.org/10.1080/00140139.2018.1498136Google Scholar
Hancock, P. A., Billings, D. R., & Schaefer, K. E. (2011). Can you trust your robot? Ergonomics in Design, 19(3), 2429. https://doi.org/10.1177/1064804611415045Google Scholar
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517527. https://doi.org/10.1177/0018720811417254Google Scholar
Hancock, P. A., Kessler, T. T., Kaplan, A. D., Brill, J. C., & Szalma, J. L. (2020). Evolving trust in robots: Specification through sequential and comparative meta-analyses. Human Factors, 62(4). https://doi.org/https://doi.org/10.1177/0018720820922080Google Scholar
Haring, K. S., Matsumoto, Y., & Watanabe, K. (2013). How do people perceive and trust a lifelike robot. Lecture Notes in Engineering and Computer Science, 1, 425430.Google Scholar
Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. 6th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’11, 147–148. https://doi.org/10.1145/1957656.1957704Google Scholar
Hertz, N., & Wiese, E. (2018). Under pressure: Examining social conformity with computer and robot groups. Human Factors, 60(8), 12071218. https://doi.org/10.1177/0018720818788473Google Scholar
Hubert, M., Hubert, M., Linzmajer, M., Riedl, R., & Kenning, P. (2018). Trust me if you can: Neurophysiological insights on the influence of consumer impulsiveness on trustworthiness evaluations in online settings. European Journal of Marketing, 52(1–2), 118146. https://doi.org/10.1108/EJM-12-2016-0870Google Scholar
Jenkins, Q., & Jiang, X. (2010). Measuring trust and application of eye tracking in human robotic interaction. IIE Annual Conference Proceedings, 1.Google Scholar
Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 5371. https://doi.org/10.1207/S15327566IJCE0401_04CrossRefGoogle Scholar
Joosse, M., Lohse, M., Perez, J. G., & Evers, V. (2013). What you do is who you are: The role of task context in perceived social robot personality. Proceedings of the IEEE International Conference on Robotics and Automation, 2134–2139. https://doi.org/10.1109/ICRA.2013.6630863Google Scholar
Kaplan, A. D., Kessler, T. T., Sanders, T. L., Cruit, J., Brill, J. C., & Hancock, P. A. (2020). Time to trust: Trust as a function of time in human-robot interaction. In Nam, C. & Lyons, J. (Eds.), Trust in human-robot interaction (pp. 143159). Academic Press.Google Scholar
Kessler, T., Stowers, K., Brill, J. C., & Hancock, P. A. (2017). Comparisons of human-human trust with other forms of human-technology trust. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 13031307. https://doi.org/10.1177/1541931213601808CrossRefGoogle Scholar
Khawaji, A., Zhou, J., Chen, F., & Marcus, N. (2015). Using galvanic skin response (GSR) to measure trust and cognitive load in the text-chat environment. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA15, 19891994. https://doi.org/10.1145/2702613.2732766Google Scholar
Koffka, K. (1935). Principles of gestalt psychology. Routledge.Google Scholar
Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 12431270. https://doi.org/10.1080/00140139208967392Google Scholar
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153184. https://doi.org/10.1006/ijhc.1994.1007Google Scholar
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 5080. https://doi.org/10.1518/hfes.46.1.50_30392Google Scholar
Li, D., Rau, P. L. P., & Li, Y. (2010). A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics, 2(2), 175186. https://doi.org/10.1007/s12369–010-0056-9Google Scholar
Linderman, T. (2018). Using an orange to fool Tesla’s autopilot is probably a really bad ideas. Motherboard: Tech by Vice. www.vice.com/en_us/article/a3na9p/tesla-autosteer-orange-hackGoogle Scholar
Looije, R., Neerincx, M. A., & Cnossen, F. (2010). Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies, 68(6), 386397. https://doi.org/10.1016/j.ijhcs.2009.08.007CrossRefGoogle Scholar
McBride, M., & Morgan, S. (2010). Trust calibration for automated decision aids. Research Brief, 9, 111.Google Scholar
Merritt, S. M. (2011). Affective processes in human-automation interactions. Human Factors, 53(4), 356370. https://doi.org/10.1177/0018720811411912Google Scholar
Mori, M. (2012). The uncanny valley. IEEE Robotics and Automation Magazine, 19(2), 98100. https://doi.org/10.1109/MRA.2012.2192811Google Scholar
National Highway Traffic Safety Administration. (2020). Driver assistance technologies. United States Department of Transportation. www.nhtsa.gov/equipment/driver-assistance-technologiesGoogle Scholar
National Transportation Safety Board. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck. Highway Accident Report.Google Scholar
Natsoulas, T. (1967). What are perceptual reports about? Psychological Bulletin, 67(4), 249272. https://doi.org/10.1037/h0024320Google Scholar
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006). Measurement of negative attitudes toward robots. Interaction Studies, 7(3), 437454. https://doi.org/10.1075/is.7.3.14nomGoogle Scholar
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230253. https://doi.org/10.1518/001872097778543886CrossRefGoogle Scholar
Phillips, E., Ullman, D., De Graaf, M. M. A., & Malle, B. F. (2017). What does a robot look like?: A multi-site examination of user expectations about robot appearance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 12151219. https://doi.org/10.1177/1541931213601786CrossRefGoogle Scholar
Reeder, G. D., & Brewer, M. B. (1979). A schematic model of dispositional attribution in interpersonal perception. Psychological Review, 86(1), 6179. https://doi.org/10.1037/0033-295X.86.1.61Google Scholar
Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95112. https://doi.org/10.1037/0022-3514.49.1.95Google Scholar
Riedl, R., & Javor, A. (2012). The biology of trust: Integrating evidence from genetics, endocrinology, and functional brain imaging. Journal of Neuroscience, Psychology, and Economics, 5(2), 6391. https://doi.org/10.1037/a0026318Google Scholar
Riedl, R., Mohr, P., Kenning, P., Davis, F., & Heekeren, H. (2014). Trusting humans and avatars: A brain imaging study based on evolution theory. Journal of Management Information Systems, 30(4), 83114. https://doi.org/10.2753/MIS0742–1222300404Google Scholar
Sanders, T. L., Kaplan, A. P., Koch, R., Schwartz, M., & Hancock, P. A. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors, 61(4), 614626. https://doi.org/10.1177/0018720818816838Google Scholar
Schaefer, K. E. (2013). The perception and measurement of human-robot trust [Doctoral dissertation]. University of Central Florida.Google Scholar
Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377400. https://doi.org/10.1177/0018720816634228Google Scholar
Scopelliti, M., Giuliani, M. V., & Fornara, F. (2005). Robots in a domestic setting: A psychological approach. Universal Access in the Information Society, 4(2), 146155. https://doi.org/10.1007/s10209–005-0118-1Google Scholar
Singh, I. L., Molloy, R., & Parasuraman, R. (1993). Automation-induced “complacency”: Development of the complacency-potential rating scale. The International Journal of Aviation Psychology, 3(2), 111122. https://doi.org/10.1207/s15327108ijap0302_2Google Scholar
Szalma, J. L., & Taylor, G. S. (2011). Individual differences in response to automation: The five-factor model of personality. Journal of Experimental Psychology: Applied, 17(2), 7196. https://doi.org/10.1037/a0024170Google Scholar
Tsui, K. M., Desai, M., & Yanco, H. A. (2010). Considering the bystander’s perspective for indirect human-robot interaction. 5th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’10, 129–130. https://doi.org/10.1145/1734454.1734506Google Scholar
Volante, W. G., Sosna, J., Kessler, T., Sanders, T. L., & Hancock, P. A. (2019). Social conformity effects on trust in simulation-based human-robot interaction. Human Factors, 61(5), 805815. https://doi.org/10.1177/0018720818811190Google Scholar
Wang, L., Rau, P. L. P., Evers, V., Robinson, B. K., & Hinds, P. (2010). When in Rome: The role of culture and context in adherence to robot recommendations. 5th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’10, 359–366. https://doi.org/10.1145/1734454.1734578CrossRefGoogle Scholar
Yagoda, R. E., & Gillan, D. J. (2012). You want me to trust a ROBOT? The development of a human-robot interaction trust scale. International Journal of Social Robotics, 4(3), 235248. https://doi.org/10.1007/s12369–012-0144-0CrossRefGoogle Scholar
