Published online by Cambridge University Press: 05 July 2012
The previous chapter (Chapter 11) explained how user requirements guided our development of meeting support technology, specifically meeting browsers and assistants. Chapters 3 to 9 discussed the enabling components, i.e., the multimodal signal processing required to build such technology. In what follows, we present an overview of the meeting browsers and assistants developed within AMI and related projects, as well as outside this consortium.
Face-to-face meetings are a key method by which organizations create and share knowledge, and the last 20 years have seen the development of new computational technology to support them.
Early research on meeting support technology focused on group decision support systems (Poole and DeSanctis, 1989), and on shared whiteboards and large displays to promote richer forms of collaboration (Mantei, 1988; Moran et al., 1998; Olson et al., 1992; Whittaker and Schwarz, 1995; Whittaker et al., 1999). There were also attempts at devising methods for evaluating these systems (Olson et al., 1992). Subsequent research was inspired by ubiquitous computing (Streitz et al., 1998; Yu et al., 2000), focusing on direct integration of collaborative computing into existing work practices and artifacts. While much of this prior work addressed support for real-time collaboration by providing richer interaction resources, another important research area is interaction capture and retrieval.
Interaction capture and retrieval is motivated by the observation that much of the valuable information exchanged in workplace interactions is never recorded, leading people to forget key decisions or to repeat prior discussions.