
25 - Analysis of Small Groups

from Part IV - Applications of Social Signal Processing

Published online by Cambridge University Press:  13 July 2017

Daniel Gatica-Perez (Idiap Research Institute and EPFL)
Oya Aran (Idiap Research Institute)
Dinesh Jayagopi (IIIT Bangalore)
Judee K. Burgoon (University of Arizona)
Nadia Magnenat-Thalmann (Université de Genève)
Maja Pantic (Imperial College London)
Alessandro Vinciarelli (University of Glasgow)

Summary

Introduction

Teams are key components of organizations and, although complexity and scale are typical features of large institutions worldwide, much of the work is still carried out by small groups. The small-group meeting, where people discuss around a table, is a pervasive and quintessential form of collaborative work. For many years now, this setting has been studied in computing with the goal of developing methods that automatically analyze the interaction using both the spoken words and the nonverbal channels as information sources. The current literature offers the possibility of inferring key aspects of the interaction, ranging from personal traits to hierarchies and other relational constructs, which in turn can be used for a number of applications. Overall, this domain is rapidly evolving and is studied in multiple subdisciplines of computing and engineering as well as the cognitive sciences.

We present a concise review of recent literature on the computational analysis of face-to-face small-group interaction. Our goal is to provide the reader with a quick pointer to work on the analysis of conversational dynamics, verticality in groups, personality of group members, and characterization of groups as a whole, with a focus on nonverbal behavior as the information source. The value of the nonverbal channel (including voice, face, and body) for inferring high-level information about individuals has been documented at length in psychology and communication (Knapp & Hall, 2009) and is one of the main themes of this volume.

In this chapter, we include pointers to 100 publications that appeared in a variety of venues between 2009 and 2013 (discussions of earlier work can be found, e.g., in Gatica-Perez, 2009). After a description of our methodology (see the section on Methodology) and a basic quantitative analysis of this body of literature (see the section on the Analysis of Main Trends), we select, due to limited space, a few works in each of the four aforementioned trends to illustrate the kind of research questions, computational approaches, and current performance available in the literature (see the sections on Conversational Dynamics, Verticality, Personality, and Group Characterization). Taken together, the existing research on small-group analysis is diverse in terms of goals and studied scenarios, relies on state-of-the-art techniques for behavioral feature extraction to characterize group members from audio, visual, and other sensor sources, and still largely uses standard machine learning techniques as tools for the computational inference of interaction-related variables of interest.
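The typical pipeline described above (behavioral feature extraction followed by standard inference) can be made concrete with a minimal sketch. This example is illustrative and not taken from the chapter: it aggregates toy speaking-turn data (a stand-in for a speaker-diarization front end) into per-participant nonverbal cues, and applies the classic baseline of ranking participants by total speaking time, one of the single cues most associated with perceived dominance in this literature (e.g., Jayagopi et al., 2009). The function names and data format are hypothetical.

```python
from collections import defaultdict

def speaking_features(turns):
    """Aggregate simple nonverbal cues per participant from a turn log.

    `turns` is a list of (speaker, start_sec, end_sec) tuples -- a toy
    stand-in for the output of a speaker-diarization front end.
    """
    feats = defaultdict(lambda: {"speaking_time": 0.0, "num_turns": 0})
    for speaker, start, end in turns:
        feats[speaker]["speaking_time"] += end - start
        feats[speaker]["num_turns"] += 1
    return dict(feats)

def most_dominant(turns):
    """Baseline inference: rank participants by total speaking time."""
    feats = speaking_features(turns)
    return max(feats, key=lambda p: feats[p]["speaking_time"])

# Toy 30-second meeting with three participants.
turns = [("A", 0, 10), ("B", 10, 12), ("A", 12, 25),
         ("C", 25, 28), ("B", 28, 30)]
print(most_dominant(turns))  # prints "A" (speaks 23 of 30 seconds)
```

In the surveyed work, such hand-crafted cues (speaking time, turns, interruptions, visual attention) would feed a standard supervised classifier rather than this single-cue rule, but the two-stage structure (cue extraction, then inference) is the same.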

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2017


References

Angus, D., Smith, A. E., & Wiles, J. (2012). Human communication as coupled time series: Quantifying multi-participant recurrence. IEEE Transactions on Audio, Speech, and Language Processing, 20(6), 1795–1807.
Aran, O. & Gatica-Perez, D. (2010). Fusing audio-visual nonverbal cues to detect dominant people in small group conversations. In Proceedings of the 20th International Conference on Pattern Recognition (pp. 3687–3690).
Aran, O. & Gatica-Perez, D. (2013a). Cross-domain personality prediction: From video blogs to small group meetings. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (pp. 127–130).
Aran, O. & Gatica-Perez, D. (2013b). One of a kind: Inferring personality impressions in meetings. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (pp. 11–18).
Aran, O., Hung, H., & Gatica-Perez, D. (2010). A multimodal corpus for studying dominance in small group conversations. In Proceedings of the LREC Workshop on Multimodal Corpora, Malta.
Ba, S. O. & Odobez, J.-M. (2009). Recognizing visual focus of attention from head pose in natural meetings. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(1), 16–33.
Ba, S. O. & Odobez, J.-M. (2011). Multiperson visual focus of attention from head pose and meeting contextual cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1), 101–116.
Bachour, K., Kaplan, F., & Dillenbourg, P. (2010). An interactive table for supporting participation balance in face-to-face collaborative learning. IEEE Transactions on Learning Technologies, 3(3), 203–213.
Baldwin, T., Chai, J. Y., & Kirchhoff, K. (2009). Communicative gestures in coreference identification in multiparty meetings. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 211–218).
Basu, S., Choudhury, T., Clarkson, B., & Pentland, A. (2001). Learning human interactions with the influence model. MIT Media Lab Vision and Modeling, Technical Report 539, June.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022.
Bohus, D. & Horvitz, E. (2009). Dialog in the open world: Platform and applications. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 31–38).
Bohus, D. & Horvitz, E. (2011). Decisions about turns in multiparty conversation: From perception to action. In Proceedings of the 13th International Conference on Multimodal Interfaces (pp. 153–160).
Bonin, F., Bock, R., & Campbell, N. (2012). How do we react to context? Annotation of individual and group engagement in a video corpus. In Proceedings of Privacy, Security, Risk and Trust (PASSAT) and International Conference on Social Computing (pp. 899–903).
Bousmalis, K., Mehu, M., & Pantic, M. (2009). Spotting agreement and disagreement: A survey of nonverbal audiovisual cues and tools. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (pp. 1–9).
Bousmalis, K., Mehu, M., & Pantic, M. (2013). Towards the automatic detection of spontaneous agreement and disagreement based on nonverbal behaviour: A survey of related cues, databases, and tools. Image and Vision Computing, 31(2), 203–221.
Bousmalis, K., Morency, L., & Pantic, M. (2011). Modeling hidden dynamics of multimodal cues for spontaneous agreement and disagreement recognition. In Proceedings of the IEEE International Conference on Automatic Face Gesture Recognition and Workshops (pp. 746–752).
Bousmalis, K., Zafeiriou, S., Morency, L.-P., & Pantic, M. (2013). Infinite hidden conditional random fields for human behavior analysis. IEEE Transactions on Neural Networks and Learning Systems, 24(1), 170–177.
Bruning, B., Schnier, C., Pitsch, K., & Wachsmuth, S. (2012). Integrating PAMOCAT in the research cycle: Linking motion capturing and conversation analysis. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 201–208).
Campbell, N., Kane, J., & Moniz, H. (2011). Processing YUP! and other short utterances in interactive speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 5832–5835).
Camurri, A., Varni, G., & Volpe, G. (2009). Measuring entrainment in small groups of musicians. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (pp. 1–4).
Carletta, J., Ashby, S., Bourban, S., et al. (2005). The AMI meeting corpus: A pre-announcement. In Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction (pp. 28–39).
Charfuelan, M. & Schroder, M. (2011). Investigating the prosody and voice quality of social signals in scenario meetings. In S. D'Mello, A. Graesser, B. Schuller, & J.-C. Martin (Eds), Affective Computing and Intelligent Interaction (vol. 6974, pp. 46–56). Berlin: Springer.
Charfuelan, M., Schroder, M., & Steiner, I. (2010). Prosody and voice quality of vocal social signals: The case of dominance in scenario meetings. In Proceedings of Interspeech 2010, September, Makuhari, Japan.
Chen, L. & Harper, M. P. (2009). Multimodal floor control shift detection. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 15–22).
Cristani, M., Pesarin, A., Drioli, C., et al. (2011). Generative modeling and classification of dialogs by a low-level turn-taking feature. Pattern Recognition, 44(8), 1785–1800.
Dai, P., Di, H., Dong, L., Tao, L., & Xu, G. (2009). Group interaction analysis in dynamic context. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(1), 34–42.
Debras, C. & Cienki, A. (2012). Some uses of head tilts and shoulder shrugs during human interaction, and their relation to stancetaking. In Proceedings of Privacy, Security, Risk and Trust (PASSAT), International Conference on Social Computing (pp. 932–937).
De Kok, I. & Heylen, D. (2009). Multimodal end-of-turn prediction in multi-party meetings. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 91–98).
Do, T. & Gatica-Perez, D. (2011). GroupUs: Smartphone proximity data and human interaction type mining. In Proceedings of the IEEE International Symposium on Wearable Computers (pp. 21–28).
Dong, W., Lepri, B., Kim, T., Pianesi, F., & Pentland, A. S. (2012). Modeling conversational dynamics and performance in a social dilemma task. In Proceedings of the 5th International Symposium on Communications Control and Signal Processing (pp. 1–4).
Dong, W., Lepri, B., & Pentland, A. (2012). Automatic prediction of small group performance in information sharing tasks. In Proceedings of Collective Intelligence Conference (CoRR abs/1204.3698).
Dong, W., Lepri, B., Pianesi, F., & Pentland, A. (2013). Modeling functional roles dynamics in small group interactions. IEEE Transactions on Multimedia, 15(1), 83–95.
Dong, W. & Pentland, A. (2010). Quantifying group problem solving with stochastic analysis. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (pp. 40:1–40:4).
Dunbar, N. E. & Burgoon, J. K. (2005). Perceptions of power and interactional dominance in interpersonal relationships. Journal of Social and Personal Relationships, 22(2), 207–233.
Escalera, S., Pujol, O., Radeva, P., Vitrià, J., & Anguera, M. T. (2010). Automatic detection of dominance and expected interest. EURASIP Journal on Advances in Signal Processing, 1.
Favre, S., Dielmann, A., & Vinciarelli, A. (2009). Automatic role recognition in multiparty recordings using social networks and probabilistic sequential models. In Proceedings of the 17th ACM International Conference on Multimedia (pp. 585–588).
Feese, S., Arnrich, B., Troster, G., Meyer, B., & Jonas, K. (2012). Quantifying behavioral mimicry by automatic detection of nonverbal cues from body motion. In Proceedings of Privacy, Security, Risk and Trust (PASSAT), International Conference on Social Computing (pp. 520–525).
Gatica-Perez, D. (2006). Analyzing group interactions in conversations: A review. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (pp. 41–46).
Gatica-Perez, D. (2009). Automatic nonverbal analysis of social interaction in small groups: A review. Image and Vision Computing (special issue on Human Behavior), 27(12), 1775–1787.
Gatica-Perez, D., op den Akker, R., & Heylen, D. (2012). Multimodal analysis of small-group conversational dynamics. In S. Renals, H. Bourlard, J. Carletta, & A. Popescu-Belis (Eds), Multimodal Signal Processing: Human Interactions in Meetings. New York: Cambridge University Press.
Germesin, S. & Wilson, T. (2009). Agreement detection in multiparty conversation. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 7–14).
Glowinski, D., Coletta, P., Volpe, G., et al. (2010). Multi-scale entropy analysis of dominance in social creative activities. In Proceedings of the International Conference on Multimedia (pp. 1035–1038).
Gorga, S. & Otsuka, K. (2010). Conversation scene analysis based on dynamic Bayesian network and image-based gaze detection. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (art. 54).
Hadsell, R., Kira, Z., Wang, W., & Precoda, K. (2012). Unsupervised topic modeling for leader detection in spoken discourse. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 5113–5116).
Hall, J. A., Coats, E. J., & Smith, L. (2005). Nonverbal behavior and the vertical dimension of social relations: A meta-analysis. Psychological Bulletin, 131(6), 898–924.
Hung, H. & Chittaranjan, G. (2010). The IDIAP wolf corpus: Exploring group behaviour in a competitive role-playing game. In Proceedings of the International Conference on Multimedia (pp. 879–882).
Hung, H. & Gatica-Perez, D. (2010). Estimating cohesion in small groups using audio-visual nonverbal behavior. IEEE Transactions on Multimedia, 12(6), 563–575.
Hung, H., Huang, Y., Friedland, G., & Gatica-Perez, D. (2011). Estimating dominance in multiparty meetings using speaker diarization. IEEE Transactions on Audio, Speech, and Language Processing, 19(4), 847–860.
Hung, H., Jayagopi, D., Yeo, C., et al. (2007). Using audio and video features to classify the most dominant person in a group meeting. In Proceedings of the 15th ACM International Conference on Multimedia (pp. 835–838).
Ishizuka, K., Araki, S., Otsuka, K., Nakatani, T., & Fujimoto, M. (2009). A speaker diarization method based on the probabilistic fusion of audio-visual location information. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 55–62).
Jayagopi, D. B. & Gatica-Perez, D. (2009). Discovering group nonverbal conversational patterns with topics. In Proceedings of the International Conference on Multimodal Interfaces (pp. 3–6).
Jayagopi, D. B. & Gatica-Perez, D. (2010). Mining group nonverbal conversational patterns using probabilistic topic models. IEEE Transactions on Multimedia, 12(8), 790–802.
Jayagopi, D. B., Hung, H., Yeo, C., & Gatica-Perez, D. (2009). Modeling dominance in group conversations from nonverbal activity cues. IEEE Transactions on Audio, Speech, and Language Processing (special issue on Multimodal Processing for Speech-based Interactions), 17(3), 501–513.
Jayagopi, D., Raducanu, B., & Gatica-Perez, D. (2009). Characterizing conversational group dynamics using nonverbal behavior. In Proceedings of the International Conference on Multimedia (pp. 370–373).
Jayagopi, D., Sanchez-Cortes, D., Otsuka, K., Yamato, J., & Gatica-Perez, D. (2012). Linking speaking and looking behavior patterns with group composition, perception, and performance. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 433–440).
Kalimeri, K., Lepri, B., Aran, O., et al. (2012). Modeling dominance effects on nonverbal behaviors using Granger causality. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 23–26).
Kalimeri, K., Lepri, B., Kim, T., Pianesi, F., & Pentland, A. (2011). Automatic modeling of dominance effects using Granger causality. In A. A. Salah & B. Lepri (Eds), Human Behavior Understanding (vol. 7065, pp. 124–133). Berlin: Springer.
Kim, S., Filippone, M., Valente, F., & Vinciarelli, A. (2012). Predicting the conflict level in television political debates: An approach based on crowdsourcing, nonverbal communication and Gaussian processes. In Proceedings of the 20th ACM International Conference on Multimedia (pp. 793–796).
Kim, S., Valente, F., & Vinciarelli, A. (2012). Automatic detection of conflicts in spoken conversations: Ratings and analysis of broadcast political debates. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 5089–5092).
Kim, T. & Pentland, A. (2009). Understanding effects of feedback on group collaboration. In Association for the Advancement of Artificial Intelligence Spring Symposium (pp. 25–30).
Knapp, M. L. & Hall, J. A. (2009). Nonverbal Communication in Human Interaction (7th edn). Boston: Wadsworth Publishing.
Kumano, S., Otsuka, K., Mikami, D., & Yamato, J. (2009). Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 99–106).
Kumano, S., Otsuka, K., Mikami, D., & Yamato, J. (2011). Analysing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings. In Proceedings of the IEEE International Conference on Automatic Face Gesture Recognition and Workshops (pp. 43–50).
La Fond, T., Roberts, D., Neville, J., Tyler, J., & Connaughton, S. (2012). The impact of communication structure and interpersonal dependencies on distributed teams. In Proceedings of Privacy, Security, Risk and Trust (PASSAT), International Conference on Social Computing (pp. 558–565).
Lepri, B., Kalimeri, K., & Pianesi, F. (2010). Honest signals and their contribution to the automatic analysis of personality traits – a comparative study. In A. A. Salah, T. Gevers, N. Sebe, & A. Vinciarelli (Eds), Human Behavior Understanding (vol. 6219, pp. 140–150). Berlin: Springer.
Lepri, B., Mana, N., Cappelletti, A., & Pianesi, F. (2009). Automatic prediction of individual performance from “thin slices” of social behavior. In Proceedings of the 17th ACM International Conference on Multimedia (pp. 733–736).
Lepri, B., Mana, N., Cappelletti, A., Pianesi, F., & Zancanaro, M. (2009). Modeling the personality of participants during group interactions. In Proceedings of the 17th International Conference on User Modeling, Adaptation, and Personalization (UMAP 2009) (pp. 114–125).
Lepri, B., Ramanathan, S., Kalimeri, K., et al. (2012). Connecting meeting behavior with extraversion – a systematic study. IEEE Transactions on Affective Computing, 3(4), 443–455.
Lepri, B., Subramanian, R., Kalimeri, K., et al. (2010). Employing social gaze and speaking activity for automatic determination of the extraversion trait. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (pp. 7:1–7:8).
Nakano, Y. & Fukuhara, Y. (2012). Estimating conversational dominance in multiparty interaction. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 77–84).
Noulas, A., Englebienne, G., & Krose, B. J. A. (2012). Multimodal speaker diarization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1), 79–93.
Olguin Olguin, D., Waber, B. N., Kim, T., et al. (2009). Sensible organizations: Technology and methodology for automatically measuring organizational behavior. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(1), 43–55.
Otsuka, K., Araki, S., Mikami, D., et al. (2009). Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors. In Proceedings of the 2009 International Conference on Multimodal Interfaces (pp. 219–220).
Otsuka, Y. & Inoue, T. (2012). Designing a conversation support system in dining together based on the investigation of actual party. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (pp. 1467–1472).
Pesarin, A., Cristani, M., Murino, V., & Vinciarelli, A. (2012). Conversation analysis at work: Detection of conflict in competitive discussions through automatic turn-organization analysis. Cognitive Processing, 13(2), 533–540.
Pianesi, F. (2013). Searching for personality. IEEE Signal Processing Magazine, 30(1), 146–158.
Poggi, I. & D’Errico, F. (2010). Dominance signals in debates. In A. A. Salah, T. Gevers, N. Sebe, & A. Vinciarelli (Eds), Human Behavior Understanding (vol. 6219, pp. 163–174). Berlin: Springer.
Prabhakar, K. & Rehg, J. M. (2012). Categorizing turn-taking interactions. In A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, & C. Schmid (Eds), European Conference on Computer Vision (vol. 7576, pp. 383–396). Berlin: Springer.
Raducanu, B. & Gatica-Perez, D. (2009). You are fired! Nonverbal role analysis in competitive meetings. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1949–1952).
Raducanu, B. & Gatica-Perez, D. (2012). Inferring competitive role patterns in reality TV show through nonverbal analysis. Multimedia Tools and Applications, 56(1), 207–226.
Raiman, N., Hung, H., & Englebienne, G. (2011). Move, and I will tell you who you are: Detecting deceptive roles in low-quality data. In Proceedings of the 13th International Conference on Multimodal Interfaces (pp. 201–204).
Ramanathan, V., Yao, B., & Fei-Fei, L. (2013). Social role discovery in human events. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2475–2482).
Rehg, J. M., Fathi, A., & Hodgins, J. K. (2012). Social interactions: A first-person perspective. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1226–1233).
Rienks, R. J. & Heylen, D. (2005). Automatic dominance detection in meetings using easily detectable features. In Proceedings of the Workshop on Machine Learning for Multimodal Interaction, Edinburgh.
Salamin, H., Favre, S., & Vinciarelli, A. (2009). Automatic role recognition in multiparty recordings: Using social affiliation networks for feature extraction. IEEE Transactions on Multimedia, 11(7), 1373–1380.
Salamin, H. & Vinciarelli, A. (2012). Automatic role recognition in multi-party conversations: An approach based on turn organization, prosody and conditional random fields. IEEE Transactions on Multimedia, 13(2), 338–345.
Salamin, H., Vinciarelli, A., Truong, K., & Mohammadi, G. (2010). Automatic role recognition based on conversational and prosodic behaviour. In Proceedings of the International Conference on Multimedia (pp. 847–850).
Sanchez-Cortes, D., Aran, O., & Gatica-Perez, D. (2011). An audio visual corpus for emergent leader analysis. In Proceedings of the Workshop on Multimodal Corpora for Machine Learning: Taking Stock and Road Mapping the Future, November.
Sanchez-Cortes, D., Aran, O., Jayagopi, D. B., Schmid Mast, M., & Gatica-Perez, D. (2012). Emergent leaders through looking and speaking: From audio-visual data to multimodal recognition. Journal on Multimodal User Interfaces, 7(1–2), 39–53.
Sanchez-Cortes, D., Aran, O., Schmid Mast, M., & Gatica-Perez, D. (2010). Identifying emergent leadership in small groups using nonverbal communicative cues. In Proceedings of the 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction (art. 39).
Sanchez-Cortes, D., Aran, O., Schmid Mast, M., & Gatica-Perez, D. (2012). A nonverbal behavior approach to identify emergent leaders in small groups. IEEE Transactions on Multimedia, 14(3), 816–832.
Sapru, A. & Bourlard, H. (2013). Automatic social role recognition in professional meetings using conditional random fields. In Proceedings of the 14th Annual Conference of the International Speech Communication Association (pp. 1530–1534).
Schoenenberg, K., Raake, A., & Skowronek, J. (2011). A conversation analytic approach to the prediction of leadership in two- to six-party audio conferences. In Proceedings of the Third International Workshop on Quality of Multimedia Experience (pp. 119–124).
Song, Y., Morency, L.-P., & Davis, R. (2012). Multimodal human behavior analysis: Learning correlation and interaction across modalities. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 27–30).
Staiano, J., Lepri, B., Kalimeri, K., Sebe, N., & Pianesi, F. (2011). Contextual modeling of personality states’ dynamics in face-to-face interactions. In Proceedings of the IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and Social Computing (pp. 896–899).
Staiano, J., Lepri, B., Ramanathan, S., Sebe, N., & Pianesi, F. (2011). Automatic modeling of personality states in small group interactions. In Proceedings of the 19th ACM International Conference on Multimedia (pp. 989–992).
Stein, R. T. (1975). Identifying emergent leaders from verbal and nonverbal communications. Journal of Personality and Social Psychology, 32(1), 125–135.
Subramanian, R., Staiano, J., Kalimeri, K., Sebe, N., & Pianesi, F. (2010). Putting the pieces together: Multimodal analysis of social attention in meetings. In Proceedings of the International Conference on Multimedia (pp. 659–662).
Sumi, Y., Yano, M., & Nishida, T. (2010). Analysis environment of conversational structure with nonverbal multimodal data. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (pp. 44:1–44:4).
Suzuki, N., Kamiya, T., Umata, I., et al. (2013). Detection of division of labor in multiparty collaboration. In Proceedings of the 15th International Conference on Human Interface and the Management of Information: Information and Interaction for Learning, Culture, Collaboration and Business (pp. 362–371).
Valente, F. & Vinciarelli, A. (2010). Improving speech processing through social signals: Automatic speaker segmentation of political debates using role based turn-taking patterns. In Proceedings of the International Workshop on Social Signal Processing (pp. 29–34).
Varni, G., Volpe, G., & Camurri, A. (2010). A system for real-time multi-modal analysis of nonverbal affective social interaction in user-centric media. IEEE Transactions on Multimedia, 12(6), 576–590.
Vinciarelli, A. (2009). Capturing order in social interactions. IEEE Signal Processing Magazine, 26, 133–152.
Vinciarelli, A., Salamin, H., Mohammadi, G., & Truong, K. (2011). More than words: Inference of socially relevant information from nonverbal vocal cues in speech. Lecture Notes in Computer Science, 6456, 24–33.
Vinciarelli, A., Valente, F., Yella, S. H., & Sapru, A. (2011). Understanding social signals in multi-party conversations: Automatic recognition of socio-emotional roles in the AMI meeting corpus. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (pp. 374–379).
Vinyals, O., Bohus, D., & Caruana, R. (2012). Learning speaker, addressee and overlap detection models from multimodal streams. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 417–424).
Voit, M. & Stiefelhagen, R. (2010). 3D user-perspective, voxel-based estimation of visual focus of attention in dynamic meeting scenarios. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (pp. 51:1–51:8).
Wang, W., Precoda, K., Hadsell, R., et al. (2012). Detecting leadership and cohesion in spoken interactions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 5105–5108).
Wang, W., Precoda, K., Richey, C., & Raymond, G. (2011). Identifying agreement/disagreement in conversational speech: A cross-lingual study. In Proceedings of the Annual Conference of the International Speech Communication Association (pp. 3093–3096).
Wilson, T. & Hofer, G. (2011). Using linguistic and vocal expressiveness in social role recognition. In Proceedings of the International Conference on Intelligent User Interfaces (pp. 419–422).
Wöllmer, M., Eyben, F., Schuller, B., & Rigoll, G. (2012). Temporal and situational context modeling for improved dominance recognition in meetings. In Proceedings of the 13th Annual Conference of the International Speech Communication Association (pp. 350–353).
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688.
