Movement scientists have proposed to ground the relation between prosody and gesture in ‘vocal-entangled gestures’, defined as biomechanical linkages between upper limb movement and the respiratory–vocal system. Focusing on spoken language negation, this article identifies an acoustic profile with which gesture is plausibly entangled, specifically linking the articulatory behaviour of onset consonant lengthening with forelimb gesture preparation and facial deformation. This phenomenon was discovered in a video corpus of accented negative utterances from English-language televised dialogues. Eight target examples were selected and examined using visualization software to analyse the correspondence of gesture phase structures (preparation, stroke, holds) with the negation word’s acoustic signal (duration, pitch and intensity). The results show that as the syllable-onset consonant lengthens (voiced alveolar /n/ = 300 ms on average) with pitch and intensity increasing (e.g. ‘NNNNNNEVER’), the speaker’s humerus is rotating with the palm pronating/adducting while his or her face is distorting. Different facial distortions, furthermore, were found to be entangled with different post-onset phonetic profiles (e.g. vowel rounding). These findings illustrate whole-bodily dynamics and multiscalarity as key theoretical proposals within ecological and enactive approaches to language. Bringing multimodal and entangled treatments of utterances into conversation has important implications for gesture studies.
Autism is associated with challenges in emotion recognition. Yet, little is known about how emotion recognition develops over time in autistic children. This four-wave longitudinal study followed the development of three emotion-recognition abilities regarding four basic emotions in children with and without autism aged 2.5 to 6 years over three years. Behavioral tasks were used to examine whether children could differentiate facial expressions (emotion differentiation), identify facial expressions with verbal labels (emotion identification), and attribute emotions to emotion-provoking situations (emotion attribution). We confirmed previous findings that autistic children experienced more difficulties in emotion recognition than non-autistic children and that these group differences were already present at preschool age. However, the group differences were observed only when children processed emotional information from facial expressions. When emotional information could be deduced from situational cues, most group differences disappeared. Furthermore, this study provided novel longitudinal evidence that emotion recognition improved with age in autistic children: compared to non-autistic children, autistic children showed similar learning curves in emotion differentiation and emotion attribution, and they showed greater improvements in emotion identification. We suggest that inclusion and respect in an environment free of stereotyping are likely to foster the development of emotion recognition among autistic children.
Drawing on the perspectives of cognitive linguistics and evolutionary biology, this contribution revisits the meaning of the Homeric formula ὑπόδρα ἰδών, literally ‘looking from below’, which is generally acknowledged as an indication of anger in epic poetry. A detailed examination of the phrase suggests that the facial expression it refers to was originally an inclination of the head while maintaining a fixed gaze ahead, resulting in a view from beneath lowered brows. It is argued that this position of the head serves as a functional preparation for a physical conflict, and consequently that the epic phrase ὑπόδρα ἰδών is not merely a metonym for anger but also a signal of the willingness to resort to violence if the conflict is not resolved by other means. This is also borne out by the contexts in which the formula occurs, since in most cases the speeches introduced with a ‘look from below’ are either followed by violent actions or cause their addressee to retract the offence.
Darwin and other pioneering scholars made comparisons between human facial signals and those of non-human primates, suggesting that they share evolutionary history. We now have tools available (the Facial Action Coding System) to make these comparisons anatomically based and standardised, as well as analytical methods to facilitate comparative studies. Here we review the evidence establishing a shared anatomical basis between the facial behaviour of human and non-human primate species, concluding which signals are probably related, and which are not. We then review the evidence for shared function and discuss the implications for understanding human communication. Where differences between humans and other species exist, we explore possible explanations and future directions for enquiry.
Autism spectrum disorder (ASD) encompasses disorders with incompletely known etiology. The facial expressions of people with ASD often do not reflect their emotions adequately, or are strongly limited. In addition, they have problems with joint attention. The symptoms of autism spectrum disorder are highly varied, differ in severity and can change over time. There are still no objective methods for estimating these symptoms, which creates a huge diagnostic and clinical problem. Motion Capture technology makes possible an objective assessment of the severity of initial symptoms, of their change over time, and of their specificity to people with ASD.
Objectives
To assess the application of Motion Capture technology in clinical evaluation and therapy for people with ASD.
Methods
We analyzed literature on the topic available in the medical databases PubMed, ResearchGate and Google Scholar. The included articles were published after 2000 and had an English or Polish abstract.
Results
We included 2 trials involving 81 participants (children and adolescents): 1 trial reported on quantifying the social symptoms of autism and 1 trial on differences in facial expressions between people with and without ASD.
Conclusions
Motion capture and the analysis of specific movements of people with autism spectrum disorder may be very useful in clinical practice, scientific research and therapy, as well as in the creation of systems that support functioning at homes, schools and kindergartens. This could enable people with ASD to function better in society.
Aberrant emotional reactivity is a putative endophenotype for bipolar disorder (BD), but the findings of behavioral studies are often negative due to suboptimal sensitivity of the employed paradigms. This study aimed to investigate whether visual gaze patterns and facial displays of emotion during emotional film clips can reveal subtle behavioral abnormalities in remitted BD patients.
Methods.
Thirty-eight BD patients in full or partial remission and 40 healthy controls viewed 7 emotional film clips. These included happy, sad, and neutral scenarios and scenarios involving winning, risk-taking, and thrill-seeking behavior of relevance to the BD phenotype. Eye gaze and facial expressions were recorded during the film clips, and participants rated their emotional reactions after each clip.
Results.
BD patients showed a negative bias in both facial displays of emotion and self-rated emotional responses. Specifically, patients exhibited more fearful facial expressions during all film clips. This was accompanied by less positive self-rated emotions during the winning and happy film clips, and more negative emotions during the risk-taking/thrill-related film clips.
Conclusions.
These findings suggest that BD is associated with trait-related abnormalities in subtle behavioral displays of emotion processing. Future studies comparing patients with BD and unipolar depression are warranted to clarify whether these differences are specific to BD. If so, assessments of visual gaze and facial displays of emotion during emotional film clips may have the potential to be implemented in clinical assessments to aid diagnostic accuracy.
There is demand for new, effective and scalable treatments for depression, and the development of new forms of cognitive bias modification (CBM) targeting negative emotional processing biases has been suggested as a possible intervention to meet this need.
Methods
We report two double-blind RCTs in which volunteers with high levels of depressive symptoms (Beck Depression Inventory-II (BDI-II) > 14) completed a brief course of emotion recognition training (a novel form of CBM using faces) or sham training. In Study 1 (N = 36), participants completed a post-training emotion recognition task whilst undergoing functional magnetic resonance imaging to investigate the neural correlates of CBM. In Study 2 (N = 190), measures of mood were assessed post-training and at 2-week and 6-week follow-up.
Results
In both studies, CBM resulted in an initial change in emotion recognition bias, which (in Study 2) persisted for 6 weeks after the end of training. In Study 1, CBM resulted in increased neural activation to happy faces, an effect driven by an increase in neural activity in the medial prefrontal cortex and bilateral amygdala. In Study 2, CBM did not lead to a reduction in depressive symptoms on the BDI-II, or on related measures of mood, motivation and persistence, or depressive interpretation bias, at either the 2-week or 6-week follow-up.
Conclusions
CBM of emotion recognition has effects on neural activity that are similar in some respects to those induced by Selective Serotonin Reuptake Inhibitor (SSRI) administration (Study 1), but we find no evidence that this had any later effect on self-reported mood in an analogue sample of non-clinical volunteers with low mood (Study 2).
How do faces convey emotional information? Some psychologists believe that private emotions automatically surface as facial expressions. Others argue that the main purpose of facial activity is to communicate social motives and influence other people’s behaviour. This chapter evaluates these competing accounts using evidence from judgement and production studies. Judgement studies ask participants to decide what emotion is being expressed in photos or videos of facial expressions. Production studies assess facial activity in emotional situations more directly. Findings obtained using these two methods do not always converge but neither kind of study provides direct support for universal or consistent emotion-expression connections. A range of different factors seems to influence facial activity, only some of which relate to emotion. However, some forms of emotional influence clearly do depend on facial communication and calibration.
This chapter continues the explanation of the origin and evolution of emoji by introducing the importance of non-verbal communication for face-to-face conversation and the issues that can arise from its absence online. The chapter looks at research into universal facial expressions, from Duchenne to Ekman, and how this provides a context for how emoji are used and interpreted. It also explains how the language of emoji faces has its origins in conventions developed in manga, which means that there is also a learned element to the way they express particular concepts and sentiments.
As early as infancy, caregivers’ facial expressions shape children's behaviors, help them regulate their emotions, and encourage or dissuade their interpersonal agency. In childhood and adolescence, proficiencies in producing and decoding facial expressions promote social competence, whereas deficiencies characterize several forms of psychopathology. To date, however, studying facial expressions has been hampered by the labor-intensive, time-consuming nature of human coding. We describe a partial solution: automated facial expression coding (AFEC), which combines computer vision and machine learning to code facial expressions in real time. Although AFEC cannot capture the full complexity of human emotion, it codes positive affect, negative affect, and arousal—core Research Domain Criteria constructs—as accurately as humans, and it characterizes emotion dysregulation with greater specificity than other objective measures such as autonomic responding. We provide an example in which we use AFEC to evaluate emotion dynamics in mother–daughter dyads engaged in conflict. Among other findings, AFEC (a) shows convergent validity with a validated human coding scheme, (b) distinguishes among risk groups, and (c) detects developmental increases in positive dyadic affect correspondence as teen daughters age. Although more research is needed to realize the full potential of AFEC, findings demonstrate its current utility in research on emotion dysregulation.
Objectives: Bipolar disorder (BD) is associated with impairments in facial emotion and emotional prosody perception during both mood episodes and periods of remission. To expand on previous research, the current study investigated cross-modal emotion perception, that is, the matching of facial emotion and emotional prosody, in remitted BD patients. Methods: Fifty-nine outpatients with BD and 45 healthy volunteers were included in a cross-sectional study. Cross-modal emotion perception was investigated using two subtests of the Comprehensive Affective Testing System (CATS). Results: Compared to control subjects, patients were impaired in matching sad (p < .001) and angry (p = .034) emotional prosody to one of five emotional faces exhibiting the corresponding emotion, and significantly more frequently matched sad emotional prosody to happy faces (p < .001) and angry emotional prosody to neutral faces (p = .017). In addition, patients were impaired in matching neutral emotional faces to the emotional prosody of one of three sentences (p = .006) and significantly more often matched neutral faces to sad emotional prosody (p = .014). Conclusions: These findings demonstrate that, even during periods of symptomatic remission, patients suffering from BD are impaired in matching facial emotion and emotional prosody. As this type of emotion processing is relevant in everyday life, our results point to the need for specific training programs to improve psychosocial outcomes. (JINS, 2019, 25, 336–342)
The facilitating role of the facial expression of surprise in the discrimination of the facial expression of fear was analyzed. The sample consisted of 202 subjects who completed a forced-choice test in which they had to decide as quickly as possible whether the facial expression displayed on-screen was one of fear, anger or happiness. Variations were made to the prime expression (a neutral expression or one of surprise), the target expression (a facial expression of fear, anger or happiness), and the prime duration (50 ms, 150 ms or 250 ms). The results revealed shorter reaction times in the response to the expression of fear when the prime expression was one of surprise, with a prime duration of 50 ms (p = .009) and 150 ms (p = .001), compared to when the prime expression was a neutral one. By contrast, reaction times were longer in the discrimination of an expression of fear when the prime expression was one of surprise with a prime duration of 250 ms (p < .0001), compared to when the prime expression was a neutral one. This pattern of results was obtained solely in the discrimination of the expression of fear. The discussion focuses on these findings and the possible functional continuity between surprise and fear.
Recognition of facial affect has been studied extensively in adults with and without traumatic brain injury (TBI), mostly by asking examinees to match basic emotion words to isolated faces. This method may not capture affect labelling in everyday life when faces are in context and choices are open-ended. To examine effects of context and response format, we asked 148 undergraduate students to label emotions shown on faces either in isolation or in natural visual scenes. Responses were categorised as representing basic emotions, social emotions, cognitive state terms, or appraisals. We used students’ responses to create a scoring system that was applied prospectively to five men with TBI. In both groups, over 50% of responses were neither basic emotion words nor synonyms, and there was no significant difference in response types between faces alone vs. in scenes. Adults with TBI used labels not seen in students’ responses, talked more overall, and often gave multiple labels for one photo. Results suggest benefits of moving beyond forced-choice tests of faces in isolation to fully characterise affect recognition in adults with and without TBI.
Cognitive and behavioural processes may constitute a risk for the onset and persistence of depression. People who become depressed frequently show enduring negative cognitions which predispose them to depression. In addition, interpersonal processes are thought to contribute to the etiology and maintenance of depression. Depression-prone persons are presumed to display deficient or problematic social behaviours that elicit negative reactions in others, ultimately resulting in withdrawal by family and friends.
About 60% of human communication is non-verbal. An ethological approach may therefore help to reveal behavioural and cognitive vulnerability factors for the onset or persistence of depression. Various studies support this presumption: high levels of observed patient behaviour indicating involvement in the interaction between depressed patients and clinicians at admission are related to the persistence of depression.
The importance of including measures of emotion processing, such as tests of facial emotion recognition (FER), as part of a comprehensive neuropsychological assessment is being increasingly recognized. In clinical settings, FER tests need to be sensitive, short, and easy to administer, given the limited time available and patient limitations. Current tests, however, commonly use stimuli that either display prototypical emotions, bearing the risk of ceiling effects and unequal task difficulty, or are cognitively too demanding and time-consuming. To overcome these limitations in FER testing in patient populations, we aimed to define FER threshold levels for the six basic emotions in healthy individuals. Forty-nine healthy individuals between 52 and 79 years of age were asked to identify the six basic emotions at different intensity levels (25%, 50%, 75%, 100%, and 125% of the prototypical emotion). Analyses uncovered differing threshold levels across emotions and the sex of facial stimuli, ranging from 50% to 100% intensity. Using these findings as “healthy population benchmarks”, we propose to apply these threshold levels to clinical populations either as facial emotion recognition or intensity rating tasks. As part of any comprehensive social cognition test battery, this approach should allow for a rapid and sensitive assessment of potential FER deficits. (JINS, 2015, 21, 568–572)
Deficits in facial affect recognition have been repeatedly reported in schizophrenia patients. The hypothesis that this deficit is caused by poorly differentiated cognitive representation of facial expressions was tested in this study. To this end, performance of patients with schizophrenia and controls was compared in a new emotion-rating task. This novel approach allowed the participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η2 = .131). Although patients and controls gave similar ratings when the emotion label matched with the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls in a summary index of expressive ambiguity showed that patients perceived angry, fearful and happy faces as more emotionally ambiguous than did the controls (p < .001, η2 = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.
Previous studies have demonstrated that emotional facial expressions alter temporal judgments. Moreover, while some studies conducted with Parkinson's disease (PD) patients suggest dysfunction in the recognition of emotional facial expression, others have shown a dysfunction in time perception. In the present study, we investigate the magnitude of temporal distortions caused by the presentation of emotional facial expressions (anger, shame, and neutral) in PD patients and controls. Twenty-five older adults with PD and 17 healthy older adults took part in the present study. PD patients were divided into two sub-groups, with and without mild cognitive impairment (MCI), based on their neuropsychological performance. Participants were tested with a time bisection task with standard intervals lasting 400 ms and 1600 ms. The effect of facial emotional stimuli on time perception was evident in all participants, yet the effect was greater for PD-MCI patients. Furthermore, PD-MCI patients were more likely to underestimate long and overestimate short temporal intervals than PD-non-MCI patients and controls. Temporal impairment in PD-MCI patients seems to be mainly caused by memory dysfunction. (JINS, 2016, 22, 890–899)
Infants’ smiling is considered an expression of affection and an index of cognitive and socio-emotional development. Despite research advances in this area, there is much to explore regarding the ontogeny of smiling, its meaning and the contexts in which it is manifested early in life. This study aimed at: (a) investigating smiling patterns at different developmental moments in early infancy, (b) analyzing patterns of association between babies’ smiles and their mothers’ affective behaviors, and (c) verifying whether babies can respond contingently, with smiles, to mothers’ affective behaviors. Participants were sixty Brazilian mother-infant dyads. Infants at three age levels (one, three, and five months of age) and their mothers were observed. They were videotaped at home during 20-minute free sessions. The results indicate an increase in the frequency of infants’ smiling instances across ages (F(2, 59) = 9.18, p < .05), variations in the frequency of maternal behaviors accompanying the variations in infants’ smiling (F(2, 59) = 6.03, p < .05), correlations between infants’ smiling and mothers’ affective behaviors, and contingency between the behaviors of mothers and infants. A strong association was verified between mothers’ behavior and their babies’ smiles, emphasizing the importance of affective interactions in early stages of development.
Multiple sclerosis (MS) may be associated with impaired perception of facial emotions. However, emotion recognition mediated by bodily postures has never been examined in these patients. Moreover, several studies have suggested a relation between emotion recognition impairments and alexithymia, in line with the idea that the ability to recognize emotions requires individuals to be able to understand their own emotions. Although a deficit in emotion recognition has been observed in MS patients, the association between impaired emotion recognition and alexithymia has received little attention. The aim of this study was, first, to investigate MS patients’ abilities to recognize emotions mediated by both facial and bodily expressions and, second, to examine whether any observed deficits in emotion recognition could be explained by the presence of alexithymia. Thirty patients with MS and 30 healthy matched controls performed experimental tasks assessing emotion discrimination and recognition of facial expressions and bodily postures. Moreover, they completed questionnaires evaluating alexithymia, depression, and fatigue. First, facial emotion recognition and, to a lesser extent, bodily emotion recognition can be impaired in MS patients. In particular, patients with higher disability showed an impairment in emotion recognition compared with patients with lower disability and controls. Second, their deficit in emotion recognition was not predicted by alexithymia. Instead, the disease’s characteristics and performance on some cognitive tasks significantly correlated with emotion recognition. Impaired facial emotion recognition is a cognitive signature of MS that is not dependent on alexithymia. (JINS, 2014, 19, 1–11)