The evaluation of meeting support technology falls broadly into three categories, which are discussed in turn in this chapter in terms of their goals, methods, and outcomes, following a brief introduction to evaluation methodology and to efforts that preceded the AMI Consortium (Section 13.1). Evaluation efforts can be technology-centric, focused on determining how well specific systems or interfaces perform the tasks for which they were designed (Section 13.2). Evaluations can also adopt a task-centric view, defining common reference tasks such as fact finding or verification, which directly support cross-comparisons of different systems and interfaces (Section 13.3). Finally, the user-centric approach evaluates meeting support technology in its real context of use, measuring the gains in efficiency and user satisfaction that it brings (Section 13.4).
These aspects of evaluation differ from the component evaluation that accompanies each of the underlying technologies described in Chapters 3 to 10, which is often a black-box evaluation based on reference data and distance metrics (although task-centric approaches have been adopted for summarization evaluation, as shown in Chapter 10). Rather, the evaluation of meeting support technology is a stage in a complex software development process, for which the helix model was proposed in Chapter 11. We return to this process in the light of evaluation undertakings, especially for meeting browsers, at the end of this chapter (Section 13.5).
13.1 Approaches to evaluation: methods, experiments, campaigns
The evaluation of meeting browsers, as pieces of software, should be related (at least in theory) to a precise view of the specifications they are intended to fulfill.