
11 - Facial Actions as Social Signals

from Part II - Machine Analysis of Social Signals

Published online by Cambridge University Press:  13 July 2017

Michel Valstar (University of Nottingham), Stefanos Zafeiriou (Imperial College London), and Maja Pantic (Imperial College London)

Edited by Judee K. Burgoon (University of Arizona), Nadia Magnenat-Thalmann (Université de Genève), Maja Pantic (Imperial College London), and Alessandro Vinciarelli (University of Glasgow)

Summary

According to a recent survey on social signal processing (Vinciarelli, Pantic, & Bourlard, 2009), next-generation computing needs to implement the essence of social intelligence, including the ability to recognize human social signals and social behaviors such as turn taking, politeness, and disagreement, in order to become more effective and more efficient. Social signals and social behaviors express one's attitude towards a social situation and interplay, and they are manifested through a multiplicity of nonverbal behavioral cues, including facial expressions, body postures and gestures, and vocal outbursts like laughter. Of the many social signals, only cues from the face, eyes, and posture are capable of informing us about all identified social behaviors. During social interaction it is a social norm to look one's dyadic partner in the eyes, focusing one's vision on the face; facial expressions therefore make for very powerful social signals. As one of the most comprehensive and objective ways to describe facial expressions, the facial action coding system (FACS) has recently received significant attention. Automating FACS coding would greatly benefit social signal processing, opening up new avenues to understanding how we communicate through facial expressions. In this chapter we provide a comprehensive overview of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction, and machine coding of facial actions. In addition, we summarize the existing FACS-coded facial expression databases. Finally, we discuss at length the challenges that must be addressed before automatic facial action analysis becomes applicable in real-life situations.
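The feature-extraction stage mentioned above can be illustrated with a minimal sketch of one appearance descriptor that recurs throughout the chapter's references, the basic 3×3 local binary pattern histogram (Ojala, Pietikäinen, & Harwood, 1996). The function name and the NumPy-only implementation are illustrative assumptions, not code from any system reviewed here.

```python
import numpy as np

def lbp_histogram(gray):
    """Histogram of basic 3x3 local binary pattern codes for one grey-level patch."""
    h, w = gray.shape
    c = gray[1:-1, 1:-1]                      # centre pixels
    code = np.zeros_like(c, dtype=np.uint8)   # one 8-bit LBP code per centre
    # 8 neighbours, clockwise from top-left; each sets one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalised 256-bin descriptor
```

In a full action unit analysis pipeline, such histograms would typically be computed over a grid of registered face regions, concatenated, and passed to a classifier such as an SVM, which is the pattern several of the referenced systems follow.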

Introduction

Scientific work on facial expressions can be traced back at least to 1872, when Charles Darwin published The Expression of the Emotions in Man and Animals. He explored the importance of facial expressions for communication and described variations in the facial expression of emotions. Today it is widely acknowledged that facial expressions serve as the primary nonverbal social signal for human beings and are responsible in large part for regulating our interactions with each other (Ekman & Rosenberg, 2005). They communicate emotions, clarify and emphasize what is being said, and signal comprehension, disagreement, and intentions (Pantic, 2009).

Type: Chapter
Information: Social Signal Processing, pp. 123–154
Publisher: Cambridge University Press
Print publication year: 2017


References

Ahlberg, J. (2001). Candide-3 – an updated parameterised face. Technical report, Linkping University, Sweden.
Ahmed, N., Natarajan, T., & Rao, K. R. (1974). Discrete cosine transform. IEEE Transactions on Computers, 23, 90–93.CrossRefGoogle Scholar
Almaev, T. & Valstar, M. (2013). Local gabor binary patterns from three orthogonal planes for automatic facial expression recognition. In Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), September 2–5, Geneva.CrossRef
Ambadar, Z., Cohn, J. F., & Reed, L. I. (2009). All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. Journal of Nonverbal Behavior, 33, 17–34.CrossRefGoogle Scholar
Ambadar, Z., Schooler, J. W., & Cohn, J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16(5), 403–410.CrossRefGoogle Scholar
Asthana, A., Cheng, S., Zafeiriou, S., & Pantic, M. (2013). Robust discriminative response map fitting with constrained local models. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 26–28, Portland, OR.CrossRef
Asthana, A., Zafeiriou, S., Cheng, S., & Pantic, M. (2014). Incremental face alignment in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1859–1866).CrossRef
Bartlett, M. S., Hager, J. C., Ekman, P., & Sejnowski, T. J. (1999). Measuring facial expressions by computer image analysis. Psychophysiology, 36(2), 253–263.CrossRefGoogle Scholar
Bartlett, M. S., Littlewort, G., Frank, M., et al. (2006). Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, 1(6), 22–35.CrossRefGoogle Scholar
Bartlett, M. S., Viola, P. A., Sejnowski, T. J., et al. (1996). Classifying facial actions. In D. S., Touretzky, M. C., Mozer, and M. E., Hasselmo (Eds), Advances in Neural Information Processing Systems 8 (pp. 823–829). Cambridge, MA: MIT Press.
Bazzo, J. & Lamar, M. (2004). Recognizing facial actions using Gabor wavelets with neutral face average difference. In Proceedings of Sixth IEEE International Conference on Automatic Face and Gesture Recognition, May 19, Seoul (pp. 505–510).CrossRef
Bobick, A. F. & Davis, J. W. (2001). The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3), 257– 267.CrossRefGoogle Scholar
Cao, X., Wei, Y., Wen, F., & Sun, J. (2012). Face alignment by explicit shape regression. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 16–21, Providence, RI (pp. 2887–2894).
Chang, C.-C. & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), 27:1–27:27.Google Scholar
Chang, K., Liu, T., & Lai, S. (2009). Learning partially observed hidden conditional random fields for facial expression recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 20–25, Miami, FL (pp. 533–540).CrossRef
Chen, J., Liu, X., Tu, P., & Aragones, A. (2013). Learning person-specific models for facial expressions and action unit recognition. Pattern Recognition Letters, 34(15), 1964–1970.CrossRefGoogle Scholar
Chew, S. W., Lucey, P., Lucey, S., et al. (2011). Person-independent facial expression detection using constrained local models. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Santa Barbara, CA (pp. 915–920).CrossRef
Chew, S. W., Lucey, P., Saragih, S., Cohn, J. F., & Sridharan, S. (2012). In the pursuit of effective affective computing: The relationship between features and registration. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 42(4), 1006–1016.Google Scholar
Chu, W., Torre, F. D. L., & Cohn, J. F. (2013). Selective transfer machine for personalized facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 23–28, Portland, OR.CrossRef
Cohn, J. F. & Schmidt, K. L. (2004). The timing of facial motion in posed and spontaneous smiles. International Journal of Wavelets, Multiresolution and Information Processing, 2(2), 121–132.CrossRefGoogle Scholar
Cootes, T., Ionita, M., Lindner, C., & Sauer, P. (2012). Robust and accurate shape model fitting using random forest regression voting. In 12th European Conference on Computer Vision, October 7–13, Florence, Italy.CrossRef
Cootes, T. & Taylor, C. (2004). Statistical models of appearance for computer vision. Technical report, University of Manchester.
Cosker, D., Krumhuber, E., & Hilton, A. (2011). A FACS valid 3-D dynamic action unit database with applications to 3-D dynamic morphable facial modeling. In Proceedings of the IEEE International Conference on Computer Vision, November 6–11, Barcelona (pp. 2296–2303).CrossRef
Costa, M., Dinsbach, W., Manstead, A. S. R., & Bitti, P. E. R. (2001). Social presence, embarrassment, and nonverbal behavior. Journal of Nonverbal Behavior, 25(4), 225–240.CrossRefGoogle Scholar
Dantone, M., Gall, J., Fanelli, G., & Gool, L. J. V. (2012). Real-time facial feature detection using conditional regression forests. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 16–21, Providence, RI (pp. 2578–2585).CrossRef
Darwin, C. (1872). The Expression of the Emotions in Man and Animals. London: John Murray.CrossRef
De la Torre, F., Campoy, J., Ambadar, Z., & Cohn, J. F. (2007). Temporal segmentation of facial behavior. In Proceedings of the IEEE International Conference on Computer Vision, October 14–21, Rio de Janeiro (pp. 1–8).
Donato, G., Bartlett, M. S., Hager, J. C., Ekman, P., & Sejnowski, T. J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10), 974–989.Google Scholar
Dornaika, F. & Davoine, F. (2006). On appearance based face and facial action tracking. IEEE Transactions on Circuits and Systems for Video Technology, 16(9), 1107–1124.CrossRefGoogle Scholar
Douglas-Cowie, E., Cowie, R., Cox, C., Amier, N., & Heylen, D. (2008). The sensitive artificial listener: An induction technique for generating emotionally coloured conversation. In LREC Workshop on Corpora for Research on Emotion and Affect, May 26, 2008, Marrakech, Marokko, pages 1–4.
Ekman, P. (2003). Darwin, deception, and facial expression. Annals of the New York Academy of Sciences, 1000, 205–221.CrossRefGoogle Scholar
Ekman, P. & Friesen, W. V. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial Action Coding System. Salt Lake City, UT: Human Face.
Ekman, P. & Ronsenberg, L. E. (2005). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System. Oxford: Oxford University Press.
Fasel, B. & Luettin, J. (2000). Recognition of asymmetric facial action unit activities and intensities. In Proceedings of the 15th International Conference on Pattern Recognition, September 3–7, Barcelona (pp. 1100–1103).CrossRef
Frank, M. G. & Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stakes lies. Journal of Personality and Social Psychology, 72(6), 1429–1439.CrossRef
Frank, M. G. & Ekman, P. (2004). Appearing truthful generalizes across different deception situations. Journal of Personality and Social Psychology, 86, 486–495.CrossRefGoogle Scholar
Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64(1), 83–93.CrossRefGoogle Scholar
Gehrig, T. & Ekenel, H. K. (2011). Facial action unit detection using kernel partial least squares. In Proceedings of the IEEE International Conference Computer Vision Workshops, November 6–13, Barcelona (2092–2099).CrossRef
Gill, D., Garrod, O., Jack, R., & Schyns, P. (2012). From facial gesture to social judgment: A psychophysical approach. Journal of Nonverbal Behavior, 3(6), 395.CrossRefGoogle Scholar
Girard, J. M., Cohn, J. F., Mahoor, M. H., Mavadati, S. M., & Rosenwald, D. P. (2013). Social risk and depression: Evidence from manual and automatic facial expression analysis. In Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, April 22–26, Shanghai.CrossRef
Gonzalez, I., Sahli, H., Enescu, V., & Verhelst, W. (2011). Context-independent facial action unit recognition using shape and Gabor phase information. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, October 9–12, Memphis, TN (pp. 548–557).CrossRef
Hamm, J., Kohler, C. G., Gur, R. C., & Verma, R. (2011). Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders. Journal of Neuroscience Methods, 200(2), 237–256.CrossRefGoogle Scholar
Huang, D., Shan, C., & Ardabilian, M. (2011). Local binary pattern and its application to facial image analysis: A survey. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 41(6), 765–781.CrossRefGoogle Scholar
Jaiswal, S., Almaev, T., & Valstar, M. F. (2013). Guided unsupervised learning of mode specific models for facial point detection in the wild. In Proceedings of the IEEE International Conference on Computer Vision Workshops, December 1–8, Sydney (pp. 370–377).CrossRef
Jeni, L. A., Girard, J. M., Cohn, J., & Torres, F. D. L. (2013). Continuous AU intensity estimation using localized, sparse facial feature space. In Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, April 22–26, Shanghai.CrossRef
Jiang, B., Valstar, M. F., Martinez, B., & Pantic, M. (2014). A dynamic appearance descriptor approach to facial actions temporal modelling. IEEE Transactions on Cybernetics, 44(2), 161– 174.CrossRef
Jiang, B., Valstar, M. F., & Pantic, M. (2011). Action unit detection using sparse appearance descriptors in space-time video volumes. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Santa Barbara, CA (pp. 314–321).CrossRef
Kaltwang, S., Rudovic, O., & Pantic, M. (2012). Continuous pain intensity estimation from facial expressions. In Proceedings of the 8th International Symposium on Visual Computing, July 16–18, Rethymnon, Crete (pp. 368–377).CrossRef
Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. In Proceedings of the 4th International Conference on Automatic Face and Gesture Recognition, March 30, Grenoble, France (pp. 46–53).CrossRef
Kapoor, A., Qi, Y., & Picard, R. W. (2003). Fully automatic upper facial action recognition. In Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures, October 17, Nice, France (pp. 195–202).CrossRef
Khademi, M., Manzuri-Shalmani, M. T., Kiapour, M. H., & Kiaei, A. A. (2010). Recognizing combinations of facial action units with different intensity using a mixture of hidden Markov models and neural network. In Proceedings of the 9th International Conference on Multiple Classifier Systems, April 7–9, Cairo(pp. 304–313).CrossRef
Khan, M. H., Valstar, M. F., & Pridmore, T. P. (2013). A multiple motion model tracker handling occlusion and rapid motion variation. In Proceedings of the 5th UK Computer Vision Student Workshop British Machine Vision Conference, September 9–13, Bristol.
Koelstra, S., Pantic, M., & Patras, I. (2010). A dynamic texture based approach to recognition of facial actions and their temporal models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11), 1940–1954.CrossRefGoogle Scholar
Kotsia, I., Zafeiriou, S., & Pitas, I. (2008). Texture and shape information fusion for facial expression and facial action unit recognition. Pattern Recognition, 41(3), 833–851.CrossRefGoogle Scholar
Li, Y., Chen, J., Zhao, Y., & Ji, Q. (2013). Data-free prior model for facial action unit recognition. IEEE Transactions on Affective Computing, 4(2), 127–141.CrossRef
Lien, J. J., Kanade, T., Cohn, J. F., & Li, C. (1998). Automated facial expression recognition based on FACS action units. In Proceedings of 3rd IEEE International Conference on Automatic Face and Gesture Recognition, April 14–16, Nara, Japan (pp. 390–395).CrossRef
Lien, J. J., Kanade, T., Cohn, J. F., & Li, C. (2000). Detection, tracking, and classification of action units in facial expression. Robotics and Autonomous Systems, 31, 131–146.CrossRefGoogle Scholar
Littlewort, G. C., Bartlett, M. S., & Lee, K. (2009). Automatic coding of facial expressions displayed during posed and genuine pain. Image and Vision Computing, 27, 1797–1803.CrossRefGoogle Scholar
Littlewort, G. C., Whitehill, J.,Wu, T., et al. (2011). The computer expression recognition toolbox (CERT). In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Piscataway, NJ (pp. 298–305).CrossRef
Liwicki, S., Tzimiropoulos, G., Zafeiriou, S., & Pantic, M. (2012). Efficient online subspace learning with an indefinite kernel for visual tracking and recognition. IEEE Transactions on Neural Networks and Learning Systems, 23, 1624–1636.CrossRefGoogle Scholar
Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., & Ambadar, Z. (2010). The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specied expression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop, June 13–18, San Francisco (pp. 94–101).CrossRef
Lucey, P., Cohn, J. F., Matthews, I., et al. (2011). Automatically detecting pain in video through facial action units. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(3), 664–674.CrossRefGoogle Scholar
Lucey, P., Cohn, J. F., Prkachin, K. M., Solomon, P. E., & Matthews, I. (2011). Painful data: The UNBC-McMaster shoulder pain expression archive database. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Santa Barbara, CA (pp. 57–64).CrossRef
Mahoor, M. H., Cadavid, S., Messinger, D. S., & Cohn, J. F. (2009). A framework for automated measurement of the intensity of non-posed facial action units. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 20–25, Miami, FL (pp. 74–80).CrossRef
Mahoor, M. H., Zhou, M., Veon, K. L., Mavadati, M., & Cohn, J. F. (2011). Facial action unit recognition with sparse representation. In Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Santa Barbara, CA (pp. 336–342).CrossRef
Martinez, B., Valstar, M. F., Binefa, X., & Pantic, M. (2013). Local evidence aggregation for regression based facial point detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5), 1149–1163.CrossRefGoogle Scholar
Matthews, I. & Baker, S. (2004). Active appearance models revisited. International Journal of Computer Vision, 60(2), 135–164.CrossRefGoogle Scholar
Mavadati, S. M., Mahoor, M. H., Bartlett, K., & Trinh, P. (2012). Automatic detection of nonposed facial action units. In Proceedings of the 19th International Conference on Image Processing, September 30–October 3, Lake Buena Vista, FL (pp. 1817–1820).
McCallum, A., Freitag, D., & Pereira, F. C. N. (2000). Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the 17th International Conference on Machine Learning, June 29–July 2, Stanford University, CA (pp. 591–598).
McDuff, D., El Kaliouby, R., Senechal, T., et al. (2013). Affectiva-mit facial expression dataset (AM-FED): Naturalistic and spontaneous facial expressions collected “in-the-wild.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, June 23–28, Portland, OR (pp. 881–888).CrossRef
McKeown, G., Valstar, M. F., Cowie, R., Pantic, M., & Schroder, M. (2012). The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Transactions on Affective Computing, 3, 5–17.CrossRefGoogle Scholar
McLellan, T., Johnston, L., Dalrymple-Alford, J., & Porter, R. (2010). Sensitivity to genuine versus posed emotion specified in facial displays. Cognition and Emotion, 24, 1277–1292.CrossRefGoogle Scholar
Milborrow, S. & Nicolls, F. (2008). Locating facial features with an extended active shape model. In Proceedings of the 10th European Conference on Computer Vision, October 12–18, Marseille, France (pp. 504–513).CrossRef
Ojala, T., Pietikäinen, M., & Harwood, D. (1996). A comparative study of texture measures with classification based on featured distribution. Pattern Recognition, 29(1), 51–59.CrossRefGoogle Scholar
Ojala, T., Pietikäinen, M., & Maenpaa, T. (2002).Multiresolution grey-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7), 971–987.CrossRefGoogle Scholar
Ojansivu, V. & Heikkilä, J. (2008). Blur insensitive texture classification using local phase quantization. In 3rd International Conference on Image and Signal Processing, July 1–3, Cherbourg- Octeville, France (pp. 236–243).CrossRef
Orozco, J., Martinez, B., & Pantic, M. (2013). Empirical analysis of cascade deformable models for multi-view face detection. In IEEE International Conference on Image Processing, September 15–18, Melbourne, Australia (pp. 1–5).
Pan, S. J. & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.CrossRefGoogle Scholar
Pantic, M. (2009). Machine analysis of facial behaviour: Naturalistic and dynamic behaviour. Philosophical Transactions of The Royal Society B: Biological sciences, 365(1535), 3505– 3513.Google Scholar
Pantic, M. & Bartlett, M. S. (2007). Machine analysis of facial expressions. In K., Delac & M., Grgic (Eds), Face Recognition (pp. 377–416). InTech.CrossRef
Pantic, M. & Patras, I. (2004). Temporal modeling of facial actions from face profile image sequences. In Proceedings of the IEEE International Conference Multimedia and Expo, June 27–30, Taipei, Taiwan (pp. 49–52).CrossRef
Pantic, M. & Patras, I. (2005). Detecting facial actions and their temporal segments in nearly frontal-view face image sequences. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, October 12, Waikoloa, HI (pp. 3358–3363).CrossRef
Pantic, M. & Patras, I. (2006). Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 36, 433–449.CrossRefGoogle Scholar
Pantic, M. & Rothkrantz, J. (2000). Automatic analysis or facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424–1445.CrossRefGoogle Scholar
Pantic, M., Rothkrantz, L., & Koppelaar, H. (1998). Automation of non-verbal communication of facial expressions. In Proceedings of the European Conference on Multimedia, January 5–7, Leicester, UK (pp. 86–93).
Pantic, M., Valstar, M. F., Rademaker, R., & Maat, L. (2005). Web-based database for facial expression analysis. In Proceedings of the IEEE International Conference on Multimedia and Expo, July 6, Amsterdam (pp. 317–321).CrossRef
Papageorgiou, C. P., Oren, M., & Poggio, T. (1998). A general framework for object detection. In Proceedings of the IEEE International Conference on Computer Vision, January 7, Bombay, India (pp. 555–562).
Pfister, T., Li, X., Zhao, G., & Pietikäinen, M. (2011). Recognising spontaneous facial microexpressions. In Proceedings of the IEEE International Conference on Computer Vision, November 6–13, Barcelona (pp. 1449–1456).
Ross, D. A., Lim, J., Lin, R.-S., & Yang, M.-H. (2008). Incremental learning for robust visual tracking. International Journal of Computer Vision, 77(1–3), 125–141.CrossRefGoogle Scholar
Rudovic, O., Pavlovic, V., & Pantic, M. (2012). Kernel conditional ordinal random fields for temporal segmentation of facial action units. In Proceedings of 12th European Conference on Computer Vision Workshop, October 7–13, Florence, Italy.CrossRef
Sánchez-Lozano, E., De la Torre, F., & González-Jiménez, D. (2012, October). Continuous regression for non-rigid image alignment. In European Conference on Computer Vision (pp. 250– 263). Springer Berlin Heidelberg.
Sánchez-Lozano, E., Martinez, B., Tzimiropoulos, G., & Valstar, M. (2016, October). Cascaded continuous regression for real-time incremental face tracking. In European Conference on Computer Vision (pp. 645–661). Springer International Publishing.
Sandbach, G., Zafeiriou, S., Pantic, M., & Yin, L. (2012). Static and dynamic 3-D facial expression recognition: A comprehensive survey. Image and Vision Computing, 30(10), 683–697.CrossRefGoogle Scholar
Saragih, J. M., Lucey, S., & Cohn, J. F. (2011). Deformable model fitting by regularized landmark mean-shift. International Journal of Computer Vision, 91(2), 200–215.CrossRefGoogle Scholar
Savran, A., Alyüz, N., Dibeklioğlu, H., et al. (2008). Bosphorus database for 3-D face analysis. In COST Workshop on Biometrics and Identity Management, May 7–9, Roskilde, Denmark (pp. 47–56).CrossRef
Savran, A., Sankur, B., & Bilge, M. T. (2012a). Comparative evaluation of 3-D versus 2-D modality for automatic detection of facial action units. Pattern Recognition, 45(2), 767–782.Google Scholar
Savran, A., Sankur, B., & Bilge, M. T. (2012b). Regression-based intensity estimation of facial action units. Image and Vision Computing, 30(10), 774–784.Google Scholar
Scherer, K. & Ekman, P. (1982). Handbook of Methods in Nonverbal Behavior Research. Cambridge: Cambridge University Press.
Senechal, T., Rapp, V., Salam, H., et al. (2011). Combining AAM coefficients with LGBP histograms in the multi-kernel SVM framework to detect facial action units. In IEEE International Conference on Automatic Face and Gesture Recognition Workshop, March 21– 25, Santa Barbara, CA (pp. 860–865).CrossRef
Senechal, T., Rapp, V., Salam, H., et al. (2012). Facial action recognition combining heterogeneous features via multi-kernel learning. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 42(4), 993–1005.CrossRefGoogle Scholar
Shan, C., Gong, S., & McOwan, P. (2008). Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing, 27(6), 803–816.CrossRefGoogle Scholar
Simon, T., Nguyen, M. H., Torre, F. D. L., & Cohn, J. (2010). Action unit detection with segmentbased SVMs. In IEEE Conference on Computer Vision and Pattern Recognition, June 13–18, San Francisco (pp. 2737–2744).
Smith, R. S. & Windeatt, T. (2011). Facial action unit recognition using filtered local binary pattern features with bootstrapped and weighted ECOC classifiers. Ensembles in Machine Learning Applications, 373, 1–20.CrossRefGoogle Scholar
Stratou, G., Ghosh, A., Debevec, P., & Morency, L.-P. (2011). Effect of illumination on automatic expression recognition: A novel 3-D relightable facial database. In IEEE International Conference on Automatic Face and Gesture Recognition, March 21–25, Santa Barbara, CA (pp. 611–618).CrossRef
Tax, D. M. J., Hendriks, E., Valstar, M. F., & Pantic, M. (2010). The detection of concept frames using clustering multi-instance learning. In Proceedings of the IEEE International Conference on Pattern Recognition, August 23–26, Istanbul, Turkey (pp. 2917–2920).CrossRef
Tian, Y., Kanade, T., & Cohn, J. (2001). Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 97–115.CrossRefGoogle Scholar
Tian, Y., Kanade, T., & Cohn, J. F. (2002). Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity. In Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition, May 21, Washington, DC (pp. 229–234).CrossRef
Tong, Y., Chen, J., & Ji, Q. (2010). A unified probabilistic framework for spontaneous facial action modeling and understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2), 258–273.Google Scholar
Tong, Y., Liao, W., & Ji, Q. (2007). Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10), 1683–1699.CrossRefGoogle Scholar
Tsalakanidou, F. & Malassiotis, S. (2010). Real-time 2-D+3-D facial action and expression recognition. Pattern Recognition, 43(5), 1763–1775.CrossRefGoogle Scholar
Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005). Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6, 1453–1484.Google Scholar
Valstar, M. F., Gunes, H., & Pantic, M. (2007). How to distinguish posed from spontaneous smiles using geometric features. In Proceedings of the 9th International Conference on Multimodal Interfaces, November 12–15, Nagoya, Japan (pp. 38–45).
Valstar, M. F., Jiang, B., Mehu, M., Pantic, M., & Scherer, K. (2011). The first facial expression recognition and analysis challenge. In IEEE International Conference on Automatic Face and Gesture Recognition Workshop, March 21–25, Santa Barbara, CA.CrossRef
Valstar, M. F., Martinez, B., Binefa, X., & Pantic, M. (2010). Facial point detection using boosted regression and graph models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 13–18, San Francisco (pp. 2729–2736).CrossRef
Valstar, M. F., Mehu, M., Jiang, B., Pantic, M., & Scherer, K. (2012). Meta-analyis of the first facial expression recognition challenge. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 42(4), 966–979.CrossRefGoogle Scholar
Valstar, M. F. & Pantic, M. (2010). Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In Proceedings of the International Conference Language Resources and Evaluation, Workshop on Emotion, May 17–23, Valetta, Malta (pp. 65–70).
Valstar, M. F. & Pantic, M. (2012). Fully automatic recognition of the temporal phases of facial actions. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 1(99), 28– 43.CrossRefGoogle Scholar
Valstar, M. F., Pantic, M., Ambadar, Z., & Cohn, J. F. (2006). Spontaneous vs. posed facial behavior: Automatic analysis of brow actions. In Proceedings of the International Conference on Multimodal Interfaces, November 2–4, Banff, Canada (pp. 162–170).
Valstar, M. F., Pantic, M., & Patras, I. (2004). Motion history for facial action detection in video. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, October 10–13, The Hague, Netherlands (pp. 635–640).CrossRef
Valstar, M. F., Patras, I., & Pantic, M. (2005). Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, September 21–23, San Diego, CA (pp. 76–84).CrossRef
Van der Maaten, L. & Hendriks, E. (2012). Action unit classification using active appearance models and conditional random field. Cognitive Processing, 13, 507–518.CrossRefGoogle Scholar
Vinciarelli, A., Pantic, M., & Bourlard, H. (2009). Social signal processing: Survey of an emerging domain. Image and Vision Computing, 27(12), 1743–1759.CrossRefGoogle Scholar
Viola, P. & Jones, M. (2003). Fast multi-view face detection. Technical report MERLTR2003–96, Mitsubishi Electric Research Laboratory.
Viola, P. & Jones, M. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137–154.CrossRefGoogle Scholar
Whitehill, J. & Omlin, C. W. (2006). Haar features for FACS AU recognition. In Proceedings of the 7th IEEE International Conference on Automatic Face and Gesture Recognition, April 10–12, Southampton, UK.CrossRef
Williams, A. C. (2002). Facial expression of pain: An evolutionary account. Behavioral and Brain Sciences, 25(4), 439–488.
Wu, T., Butko, N. J., Ruvolo, P., et al. (2012). Multilayer architectures for facial action unit recognition. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 42(4), 1027–1038.
Xiong, X. & De la Torre, F. (2013). Supervised descent method and its applications to face alignment. In IEEE Conference on Computer Vision and Pattern Recognition, June 23–28, Portland, OR.
Yang, P., Liu, Q., & Metaxas, D. N. (2009). Boosting encoded dynamic features for facial expression recognition. Pattern Recognition Letters, 30(2), 132–139.
Yang, P., Liu, Q., & Metaxas, D. N. (2011). Dynamic soft encoded patterns for facial event analysis. Computer Vision and Image Understanding, 115(3), 456–465.
Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39–58.
Zhang, L., Tong, Y., & Ji, Q. (2008). Active image labeling and its application to facial action labeling. In European Conference on Computer Vision, October 12–18, Marseille, France (pp. 706–719).
Zhang, L. & Van der Maaten, L. (2013). Structure preserving object tracking. In IEEE Conference on Computer Vision and Pattern Recognition, June 23–28, Portland, OR.
Zhang, X., Yin, L., Cohn, J. F., et al. (2013). A high resolution spontaneous 3-D dynamic facial expression database. In IEEE International Conference on Automatic Face and Gesture Recognition, April 22–26, Shanghai (pp. 22–26).
Zhang, Z., Lyons, M., Schuster, M., & Akamatsu, S. (1998). Comparison between geometry-based and Gabor wavelets-based facial expression recognition using multi-layer perceptron. In Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, April 14–16, Nara, Japan (pp. 454–459).
Zhao, G. Y. & Pietikäinen, M. (2007). Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), 915–928.
Zhou, F., De la Torre, F., & Cohn, J. F. (2010). Unsupervised discovery of facial events. In IEEE Conference on Computer Vision and Pattern Recognition, June 13–18, San Francisco.
Zhu, X. & Ramanan, D. (2012). Face detection, pose estimation, and landmark localization in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, June 16–21, Providence, RI (pp. 2879–2886).
Zhu, Y., De la Torre, F., Cohn, J. F., & Zhang, Y. (2011). Dynamic cascades with bidirectional bootstrapping for action unit detection in spontaneous facial behavior. IEEE Transactions on Affective Computing, 2(2), 79–91.