
23 - Social Signal Processing in Social Robotics

from Part III - Machine Synthesis of Social Signals

Published online by Cambridge University Press: 13 July 2017

Maha Salem (University of Hertfordshire)
Kerstin Dautenhahn (University of Hertfordshire)
Judee K. Burgoon (University of Arizona)
Nadia Magnenat-Thalmann (Université de Genève)
Maja Pantic (Imperial College London)
Alessandro Vinciarelli (University of Glasgow)

Summary

Introduction

In recent years, the roles of robots have become increasingly social, leading to a shift from machines designed for traditional human–robot interaction (HRI), such as industrial robots, to machines intended for social HRI. As a result, the wide range of robotics applications today includes service and household robots, museum and reception attendants, toys and entertainment devices, educational robots, route guides, and robots for elderly assistance, therapy, and rehabilitation. In light of this transformation of application domains, many researchers have investigated designs and capabilities that enable robots to engage in meaningful social interactions with humans (Breazeal, 2003).

The term social robots was defined by Fong, Nourbakhsh, and Dautenhahn (2003) to describe “embodied agents that are part of a heterogeneous group: a society of robots or humans. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other” (p. 144). Other terms that have been widely used are “socially interactive robots” (Fong et al., 2003), with an emphasis on peer-to-peer multimodal interaction and communication between robots and people, and “sociable robots” (Breazeal, 2002), which proactively engage with people based on models of social cognition. A discussion of the different concepts of social robots can be found in Dautenhahn (2007). Note that all the above definitions consider social robots in the context of interactions with humans; this is in contrast to approaches to collective and swarm robotics (Kube, 1993; Bonabeau, Dorigo, & Theraulaz, 1999; Kernbach, 2013), which emphasise interactions among large groups of (typically) identical robots that rely strongly on communication mediated by the environment and afforded by the robots' physical embodiment.

Together with the attempt to name and define this new category of robots, a whole new research area – social robotics – has since emerged. Social robotics research is dedicated to designing, developing, and evaluating robots that can engage in social environments in a way that is appealing and familiar to human interaction partners (Salem et al., 2013). However, interaction is often difficult, as inexperienced users struggle to understand the robot's internal states, intentions, actions, and expectations. To facilitate successful interaction, social robots should therefore provide communicative functionality that is intuitive and, to some extent, natural to humans.
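To make this last point concrete, the sketch below shows one simple way a robot could express the same internal state redundantly across speech, gaze, and gesture, so that an inexperienced user can read its intentions from whichever cue they happen to notice. This is a minimal illustration only, not a method from this chapter; all names in it (RobotState, SignalPlan, plan_signals, and the cue strings) are hypothetical.

```python
# A minimal sketch (not from the chapter) of mapping a robot's internal
# state to redundant, human-legible social signals. All names here are
# hypothetical and stand in for whatever a real robot platform provides.
from dataclasses import dataclass


@dataclass
class RobotState:
    intention: str     # e.g. "hand_over_object"
    confidence: float  # 0.0-1.0: how sure the robot is about its plan
    target: str        # e.g. "user"


@dataclass
class SignalPlan:
    speech: str
    gaze: str
    gesture: str


def plan_signals(state: RobotState) -> SignalPlan:
    """Express the same intention on several channels at once, so a user
    who misses one cue (e.g. speech) can still read another (e.g. gesture)."""
    if state.intention == "hand_over_object":
        if state.confidence > 0.7:
            return SignalPlan(
                speech="Here you are.",
                gaze=f"look_at:{state.target}",       # gaze signals the addressee
                gesture="extend_arm_with_open_palm",  # gesture signals the action
            )
        # Low confidence: hedge verbally and pause mid-action,
        # inviting the user to correct the robot.
        return SignalPlan(
            speech="Did you want this one?",
            gaze=f"alternate:object,{state.target}",
            gesture="hold_object_midway_and_pause",
        )
    # Default: make inactivity legible rather than leaving the user guessing.
    return SignalPlan(speech="", gaze="idle_scan", gesture="rest_pose")


if __name__ == "__main__":
    print(plan_signals(RobotState("hand_over_object", 0.9, "user")))
```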

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2017


References

Baxter, J. (1970). Interpersonal spacing in natural settings. Sociometry, 33(4), 444–456.
Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm Intelligence: From Natural to Artificial Systems. New York: Oxford University Press.
Breazeal, C. (2002). Designing Sociable Robots. Cambridge, MA: MIT Press.
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3–4), 167–175.
Cassell, J., McNeill, D., & McCullough, K.-E. (1998). Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics & Cognition, 6(2), 1–34.
Castellano, G., Leite, I., Pereira, A., et al. (2010). Affect recognition for interactive companions: Challenges and design in real world scenarios. Journal on Multimodal User Interfaces, 3(1–2), 89–98.
Castellano, G., Leite, I., Pereira, A., et al. (2013). Multimodal affect modeling and recognition for empathic robot companions. International Journal of Humanoid Robotics, 10(1).
Chidambaram, V., Chiang, Y.-H., & Mutlu, B. (2012). Designing persuasive robots: How robots might persuade people using vocal and nonverbal cues. In Proceedings of the 7th ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 293–300), Boston, MA.
Coulson, M. (2004). Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. Journal of Nonverbal Behavior, 28(2), 117–139.
Crane, E. & Gross, M. (2007). Motion capture and emotion: Affect detection in whole body movement. In A. Paiva, R. Prada, & R. W. Picard (Eds), Affective Computing and Intelligent Interaction (pp. 95–101). Berlin: Springer.
Dautenhahn, K. (2007). Socially intelligent robots: Dimensions of human–robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 679–704.
Deutsch, R. D. (1977). Spatial Structurings in Everyday Face-to-face Behavior. Orangeburg, NY: Association for the Study of Man–Environment Relations.
Droeschel, D., Stückler, J., & Behnke, S. (2011). Learning to interpret pointing gestures with a time-of-flight camera. In Proceedings of the 6th ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 481–488), Lausanne, Switzerland.
Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3–4), 177–190.
Duque, I., Dautenhahn, K., Koay, K. L., Willcock, L., & Christianson, B. (2013). A different approach of using personas in human–robot interaction: Integrating personas as computational models to modify robot companions' behaviour. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 424–429), Gyeongju, South Korea.
El Kaliouby, R. & Robinson, P. (2005). Generalization of a vision-based computational model of mind-reading. In J. Tao, T. Tan, & R. Picard (Eds), Affective Computing and Intelligent Interaction (vol. 3784, pp. 582–589). Berlin: Springer.
Epley, N., Waytz, A., & Cacioppo, J. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
Eyssel, F., Kuchenbrandt, D., Hegel, F., & De Ruiter, L. (2012). Activating elicited agent knowledge: How robot and user features shape the perception of social robots. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (pp. 851–857), Paris.
Fong, T., Nourbakhsh, I. R., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4), 143–166.
François, D., Dautenhahn, K., & Polani, D. (2009). Using real-time recognition of human–robot interaction styles for creating adaptive robot behaviour in robot-assisted play. In Proceedings of the 2nd IEEE Symposium on Artificial Life (pp. 45–52), Nashville, TN.
Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behavior to tasks to improve human–robot cooperation. In Proceedings of the 12th IEEE International Symposium on Robot and Human Interactive Communication (pp. 55–60), Millbrae, CA.
Goldin-Meadow, S. (1999). The role of gesture in communication and thinking. Trends in Cognitive Sciences, 3, 419–429.
Goodrich, M. A. & Schultz, A. C. (2007). Human–robot interaction: A survey. Foundations and Trends in Human–Computer Interaction, 1(3), 203–275.
Hall, E. (1995). Handbook for proxemic research. Anthropology News, 36(2), 40.
Honda Motor Co. Ltd (2000). The Honda humanoid robot ASIMO, year 2000 model. http://world.honda.com/ASIMO/technology/2000/.
Hostetter, A. B. (2011). When do gestures communicate? A meta-analysis. Psychological Bulletin, 137(2), 297–315.
Kanda, T., Ishiguro, H., Ono, T., Imai, M., & Nakatsu, R. (2002). Development and evaluation of an interactive humanoid robot “Robovie.” In Proceedings of the IEEE International Conference on Robotics and Automation (pp. 1848–1855), Washington, DC.
Kernbach, S. (2013). Handbook of Collective Robotics: Fundamentals and Challenges. Boca Raton, FL: Pan Stanford.
Kim, H., Kwak, S., & Kim, M. (2008). Personality design of sociable robots by control of gesture design factors. In Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication (pp. 494–499), Munich.
Koay, K. L., Lakatos, G., Syrdal, D. S., et al. (2013). Hey! There is someone at your door: A hearing robot using visual communication signals of hearing dogs to communicate intent. In Proceedings of the 2013 IEEE Symposium on Artificial Life (pp. 90–97).
Kolb, D. (1984). Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall.
Kube, C. R. (1993). Collective robotics: From social insects to robots. Adaptive Behavior, 2(2), 189–218.
Lang, C., Wachsmuth, S., Hanheide, M., & Wersing, H. (2012). Facial communicative signals. International Journal of Social Robotics, 4(3), 249–262.
Lee, J., Chao, C., Bobick, A., & Thomaz, A. (2012). Multi-cue contingency detection. International Journal of Social Robotics, 4(2), 147–161.
Lee, S.-l., Kiesler, S., Lau, I. Y.-m., & Chiu, C.-Y. (2005). Human mental models of humanoid robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (pp. 2767–2772).
Lütkebohle, I., Hegel, F., Schulz, S., et al. (2010). The Bielefeld anthropomorphic robot head “Flobi.” In Proceedings of the IEEE International Conference on Robotics and Automation (pp. 3384–3391), Anchorage, AK.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Mead, R., Atrash, A., & Matarić, M. (2013). Automated proxemic feature extraction and behavior recognition: Applications in human–robot interaction. International Journal of Social Robotics, 5(3), 367–378.
Metta, G., Sandini, G., Vernon, D., Natale, L., & Nori, F. (2008). The iCub humanoid robot: An open platform for research in embodied cognition. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems (pp. 50–56).
Mori, M. (1970). The uncanny valley (trans. K. F. MacDorman & T. Minato). Energy, 7(4), 33–35.
Mumm, J. & Mutlu, B. (2011). Human–robot proxemics: Physical and psychological distancing in human–robot interaction. In Proceedings of the 6th International Conference on Human–Robot Interaction (pp. 331–338), Lausanne, Switzerland.
Mutlu, B., Kanda, T., Forlizzi, J., Hodgins, J., & Ishiguro, H. (2012). Conversational gaze mechanisms for humanlike robots. ACM Transactions on Interactive Intelligent Systems, 1(2).
Pantic, M. & Patras, I. (2006). Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36(2), 433–449.
Pollick, F., Paterson, H., Bruderlin, A., & Sanford, A. (2001). Perceiving affect from arm movement. Cognition, 82(2), 51–61.
Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., & Joublin, F. (2013). To err is human(-like): Effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics, 5(3), 313–323.
Suarez, J. & Murphy, R. R. (2012). Hand gesture recognition with depth images: A review. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (pp. 411–417), Paris.
Thomaz, A. L., Berlin, M., & Breazeal, C. (2005). An embodied computational model of social referencing. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (pp. 591–598).
Vinciarelli, A., Pantic, M., & Bourlard, H. (2008). Social signal processing: Survey of an emerging domain. Image and Vision Computing, 27, 1743–1759.
Walters, M. L., Dautenhahn, K., Te Boekhorst, R., et al. (2009). An empirical framework for human–robot proxemics. In Proceedings of New Frontiers in Human–Robot Interaction (pp. 144–149).
Walters, M. L., Syrdal, D. S., Dautenhahn, K., Te Boekhorst, R., & Koay, K. L. (2008). Avoiding the uncanny valley: Robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Autonomous Robots, 24(2), 159–178.
Yeasin, M., Bullot, B., & Sharma, R. (2006). Recognition of facial expressions and measurement of levels of interest from video. IEEE Transactions on Multimedia, 8(3), 500–508.
Zeng, Z., Pantic, M., Roisman, G., & Huang, T. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39–58.
