
Trait attribution explains human–robot interactions

Published online by Cambridge University Press: 05 April 2023

Yochanan E. Bigman
Affiliation:
The Hebrew University Business School, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel. yochanan.bigman@mail.huji.ac.il; https://ybigman.wixsite.com/ybigman
Nicholas Surdel
Affiliation:
Department of Psychology, Yale University, New Haven, CT 06520-8205, USA. nicholas.surdel@yale.edu; https://www.linkedin.com/in/nsurdel
Melissa J. Ferguson
Affiliation:
Department of Psychology, Yale University, New Haven, CT 06520-8205, USA. melissaj.ferguson@gmail.com; www.fergusonlab.com

Abstract

Clark and Fischer (C&F) claim that trait attribution has major limitations in explaining human–robot interactions. We argue that the trait attribution approach can account for all three issues C&F raise. We also argue that the trait attribution approach is parsimonious: it assumes that the same mechanisms of social cognition that operate in human–human interaction also apply to human–robot interaction.

Type: Open Peer Commentary

Copyright: © The Author(s), 2023. Published by Cambridge University Press

