
16 - Automatic Analysis of Social Emotions

from Part II - Machine Analysis of Social Signals

Published online by Cambridge University Press: 13 July 2017

Hatice Gunes (University of Cambridge) and Björn Schuller (Imperial College London and Technical University of Munich)

Edited by Judee K. Burgoon (University of Arizona), Nadia Magnenat-Thalmann (Université de Genève), Maja Pantic (Imperial College London), and Alessandro Vinciarelli (University of Glasgow)

Summary

Automatic emotion recognition has largely focused on analysing and inferring expressions of the six basic emotions – happiness, sadness, fear, anger, surprise, and disgust. Far less attention has been paid to social emotions such as kindness, unfriendliness, jealousy, guilt, arrogance, and shame, or to understanding the social behaviour that follows from them. Social context plays an important role in the labelling and recognition of social emotions, which are difficult to recognise out of context.

Social emotions are emotions that have a social component, such as rage arising from a perceived offence (Gratch, Mao, & Marsella, 2006) or embarrassment that deflects undue attention from someone else (Keltner & Buswell, 1997). Such emotions are crucial for what we call social intelligence, and they appear to arise from social explanations involving judgments of causality as well as intention and free will (Shaver, 1985).

To date, most automatic affect analysers in the literature have performed one-sided analysis, examining one party in isolation from the party with whom they interact (Gunes & Schuller, 2013). This assumption is unrealistic for the automatic analysis of social emotions, because the social setting itself, and the biases it introduces, shape how expressively emotions are displayed in a group. The recent interest in analysing and understanding group expressions (e.g., Dhall & Goecke, 2012) will therefore potentially contribute to progress in the automatic analysis of social emotions.
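As a toy illustration of the step from individual to group-level analysis, per-face expression estimates can be pooled into a single group score. The size-based weighting below is a simplifying assumption for illustration only; Dhall & Goecke (2012) instead learn the mapping from individual faces to group expression intensity with Gaussian processes.

```python
def group_expression_intensity(face_intensities, face_sizes):
    """Pool per-face expression intensities into one group-level score.

    Weighting by face size is an illustrative heuristic (larger detected
    faces are typically closer and more salient in the image); it is not
    the model of Dhall & Goecke (2012).
    """
    if len(face_intensities) != len(face_sizes) or not face_sizes:
        raise ValueError("need one size per face, and at least one face")
    total = sum(face_sizes)
    return sum(i * s for i, s in zip(face_intensities, face_sizes)) / total

# Three detected faces: smile intensities in [0, 1] and bounding-box areas.
print(group_expression_intensity([0.9, 0.4, 0.7], [1200, 400, 800]))
```

Even this crude pooling makes the social bias explicit: one highly expressive face near the camera can dominate the group score unless the weighting is chosen with care.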

Recent developments in social media and social websites have opened new avenues for exploiting user-generated affective labels, such as 'amused', 'touched', and 'empathetic', in social interactions. Accordingly, a number of researchers refer to the automatic analysis of social emotions as 'social affective analysis' (e.g., social affective text mining) (Bao et al., 2012). Such work has focused on automatically predicting social emotions from text content by attempting to establish a connection between affective terms and social emotions (Bao et al., 2012).
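As a rough sketch of the lexicon-based idea, a minimal predictor can score a document over social emotions by summing per-term association weights. The terms, labels, and weights below are invented for the example; in Bao et al. (2012) such term-to-emotion associations are learned from reader-voted documents rather than written by hand.

```python
from collections import Counter

# Hypothetical affective-term lexicon: each term maps to association
# weights over social emotions (values here are made up for illustration).
LEXICON = {
    "hilarious": {"amused": 0.9},
    "tragic": {"sad": 0.9, "touched": 0.1},
    "heartwarming": {"touched": 0.7, "happy": 0.3},
}

def predict_social_emotions(text):
    """Sum association weights of the affective terms found in `text`,
    then normalise the result into a distribution over social emotions."""
    scores = Counter()
    for token in text.lower().split():
        for emotion, weight in LEXICON.get(token.strip(".,!?"), {}).items():
            scores[emotion] += weight
    total = sum(scores.values())
    return {e: w / total for e, w in scores.items()} if total else {}

print(predict_social_emotions("A tragic yet heartwarming story"))
```

The design choice that matters here is the bridge itself: affective surface terms are observable in text, while the social emotion a reader feels is not, so the lexicon (learned or hand-built) is what connects the two.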

Type: Chapter
In: Social Signal Processing (pp. 213–224)
Publisher: Cambridge University Press
Print publication year: 2017

References

Abelin, A. & Allwood, J. (2000). Cross linguistic interpretation of emotional prosody. In Proceedings of the ISCA Workshop on Speech and Emotion, Belfast, UK.
Bao, S., Xu, S., Zhang, L., et al. (2012). Mining social emotions from affective text. IEEE Transactions on Knowledge and Data Engineering, 24(9), 1658–1670.
Barrett, K. C. & Campos, J. J. (1987). Perspectives on emotional development II: A functionalist approach to emotion. In J. D. Osofsky (Ed.), Handbook of Infant Development (2nd edn, pp. 555–578). New York: Wiley.
Bremner, P., Trigoni, N., Brown, I., et al. (2013). Being there: Humans and robots in public spaces. In Proceedings of the International Conference on Social Robotics, Bristol.
Carver, C. S. (2003). Pleasure as a sign you can attend to something else: Placing positive feelings within a general model of affect. Cognition and Emotion, 17, 241–261.
Celiktutan, O. & Gunes, H. (2014). Continuous prediction of perceived traits and social dimensions in space and time. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris.
Chen, M. & Bargh, J. A. (1999). Consequences of automatic evaluation: Immediate behavioral predispositions to approach or avoid the stimulus. Personality and Social Psychology Bulletin, 25, 215–224.
Cohn, J. F., Reed, L. I., Moriyama, T., et al. (2004). Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (pp. 129–135), Seoul.
Costa, M., Dinsbach, W., Manstead, A. S. R., & Bitti, P. E. R. (2001). Social presence, embarrassment, and nonverbal behavior. Journal of Nonverbal Behavior, 25(4), 225–240.
Coulson, M. (2004). Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. Journal of Nonverbal Behavior, 28(2), 117–139.
Cowie, R., Gunes, H., McKeown, G., et al. (2010). The emotional and communicative significance of head nods and shakes in a naturalistic database. In Proceedings of the LREC International Workshop on Emotion (pp. 42–46), Valletta, Malta.
Dael, N., Mortillaro, M., & Scherer, K. R. (2012). The body action and posture coding system (BAP): Development and reliability. Journal of Nonverbal Behavior, 36(2), 97–121.
Dhall, A., Asthana, A., Goecke, R., & Gedeon, T. (2011). Emotion recognition using PHOG and LPQ features. In Proceedings of the Workshop on Facial Expression Recognition and Analysis Challenge (FERA) at the IEEE International Conference on Automatic Face and Gesture Recognition (pp. 878–883), Santa Barbara, CA.
Dhall, A. & Goecke, R. (2012). Group expression intensity estimation in videos via Gaussian processes. In Proceedings of the International Conference on Pattern Recognition (pp. 3525–3528), Tsukuba, Japan.
Eyben, F., Weninger, F., Groß, F., & Schuller, B. (2013). Recent developments in openSMILE, the Munich open-source multimedia feature extractor. In Proceedings of the 21st ACM International Conference on Multimedia (MM 2013), Barcelona, Spain.
Eyben, F., Wöllmer, M., Valstar, M., et al. (2011). String-based audiovisual fusion of behavioural events for the assessment of dimensional affect. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (pp. 322–329), Santa Barbara, CA.
Förster, J. & Strack, F. (1996). Influence of overt head movements on memory for valenced words: A case of conceptual–motor compatibility. Journal of Personality and Social Psychology, 71, 421–430.
Gallagher, A. & Chen, T. (2009). Understanding images of groups of people. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 256–263), Miami.
Gratch, J., Mao, W., & Marsella, S. (2006). Modeling social emotions and social attributions. In R. Sun (Ed.), Cognitive Modeling and Multi-agent Interactions (pp. 219–251). Cambridge: Cambridge University Press.
Gunes, H. & Pantic, M. (2010). Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. In Proceedings of the International Conference on Intelligent Virtual Agents (pp. 371–377), Philadelphia, PA.
Gunes, H. & Piccardi, M. (2009). Automatic temporal segment detection and affect recognition from face and body display. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 39(1), 64–84.
Gunes, H. & Schuller, B. (2013). Categorical and dimensional affect analysis in continuous input: Current trends and future directions. Image and Vision Computing, 31(2), 120–136.
Gunes, H., Shan, C., Chen, S., & Tian, Y. (2015). Bodily expression for automatic affect recognition. In A. Konar & A. Chakraborty (Eds), Emotion Recognition: A Pattern Analysis Approach (pp. 343–378). Hoboken, NJ: John Wiley & Sons.
Han, J., Shao, L., Xu, D., & Shotton, J. (2013). Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 43, 1318–1334.
Hareli, S. & Parkinson, B. (2008). What is social about social emotions? Journal for the Theory of Social Behaviour, 38(2), 131–156.
Hernandez, J. & Hoque, E. (2011). MIT Mood Meter. moodmeter.media.mit.edu.
Hillman, C. H., Rosengren, K. S., & Smith, D. P. (2004). Emotion and motivated behavior: Postural adjustments to affective picture viewing. Biological Psychology, 66, 51–62.
Hoque, M., Morency, L.-P., & Picard, R. W. (2012). Are you friendly or just polite? Analysis of smiles in spontaneous face-to-face interactions. In S. D'Mello, A. Graesser, B. Schuller, & J.-C. Martin (Eds), Affective Computing and Intelligent Interaction (vol. 6974, pp. 135–144). New York: Springer.
Inderbitzin, M., Väljamäe, A., & Calvo, J. M. B. (2011). Expression of emotional states during locomotion based on canonical parameters. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (pp. 809–814), Santa Barbara, CA.
Janssen, D., Schöllhorn, W. I., Lubienetzki, J., et al. (2008). Recognition of emotions in gait patterns by means of artificial neural nets. Journal of Nonverbal Behavior, 32, 79–92.
Joshi, J., Gunes, H., & Goecke, R. (2014). Automatic prediction of perceived traits using visual cues under varied situational context. In Proceedings of the 22nd International Conference on Pattern Recognition (ICPR), Stockholm.
Kalayci, S., Ekenel, H. K., & Gunes, H. (2014). Automatic analysis of facial attractiveness from video. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris.
Karg, M., Kühnlenz, K., & Buss, M. (2010). Recognition of affect based on gait patterns. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 40, 1050–1061.
Keltner, D. & Buswell, B. N. (1997). Embarrassment: Its distinct form and appeasement functions. Psychological Bulletin, 122, 250–270.
Kleinsmith, A. & Bianchi-Berthouze, N. (2007). Recognizing affective dimensions from body posture. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (pp. 48–58), Lisbon.
Kleinsmith, A. & Bianchi-Berthouze, N. (2012). Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 4(1), 15–33.
Marchi, E., Schuller, B., Batliner, A., et al. (2012). Emotion in the speech of children with autism spectrum conditions: Prosody and everything else. In Proceedings of the 3rd Workshop on Child, Computer and Interaction (WOCCI 2012), Portland, OR.
Meservy, T. O., Jensen, M. L., Kruse, J., et al. (2005). Deception detection through automatic, unobtrusive analysis of nonverbal behavior. IEEE Intelligent Systems, 20(5), 36–43.
Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2007). Textual affect sensing for sociable and expressive online communication. In A. Paiva, R. Prada, & R. Picard (Eds), Affective Computing and Intelligent Interaction (vol. 4738, pp. 220–231). New York: Springer.
Pantic, M. & Bartlett, M. S. (2007). Machine analysis of facial expressions. In K. Delac & M. Grgic (Eds), Face Recognition (pp. 377–416). Vienna: I-Tech Education and Publishing.
Parkinson, B., Fischer, A. H., & Manstead, A. S. R. (2005). Emotion in Social Relations: Cultural, Group, and Interpersonal Processes. New York: Psychology Press.
Pfister, T. (2009). Emotion detection from speech. PhD thesis, Cambridge University.
Pittam, J. & Scherer, K. (1993). Vocal expression and communication of emotion. In M. Lewis & J. M. Haviland-Jones (Eds), Handbook of Emotions (pp. 185–197). New York: Guilford Press.
Saragih, J. & Goecke, R. (2009). Learning AAM fitting through simulation. Pattern Recognition, 42(11), 2628–2636.
Schuller, B., Batliner, A., Steidl, S., & Seppi, D. (2011). Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge. Speech Communication, Special Issue on Sensing Emotion and Affect – Facing Realism in Speech Processing, 53(9/10), 1062–1087.
Schuller, B., Marchi, E., Baron-Cohen, S., et al. (2013). ASC-inclusion: Interactive emotion games for social inclusion of children with autism spectrum conditions. In Proceedings of the 1st International Workshop on Intelligent Digital Games for Empowerment and Inclusion, Chania, Crete.
Schuller, B. & Rigoll, G. (2009). Recognising interest in conversational speech – comparing bag of frames and supra-segmental features. In Proceedings of InterSpeech 2009, 10th Annual Conference of the International Speech Communication Association (pp. 1999–2002). Brighton, UK: ISCA.
Schuller, B., Steidl, S., Batliner, A., et al. (2013). The InterSpeech 2013 computational paralinguistics challenge: Social signals, conflict, emotion, autism. In Proceedings of InterSpeech 2013, 14th Annual Conference of the International Speech Communication Association (pp. 148–152), Lyon, France.
Shaver, K. G. (1985). The Attribution of Blame: Causality, Responsibility, and Blameworthiness. New York: Springer.
Sobol-Shikler, T. (2007). Analysis of affective expression in speech. PhD thesis, Cambridge University.
Tracy, J. L. & Matsumoto, D. (2008). The spontaneous expression of pride and shame: Evidence for biologically innate nonverbal displays. Proceedings of the National Academy of Sciences of the United States of America, 105(33), 11655–11660.
Valstar, M. F., Gunes, H., & Pantic, M. (2007). How to distinguish posed from spontaneous smiles using geometric features. In Proceedings of the ACM International Conference on Multimodal Interfaces (pp. 38–45), Nagoya, Japan.
Wallbott, H. G. (1998). Bodily expression of emotion. European Journal of Social Psychology, 28, 879–896.
Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 39–58.