Published online by Cambridge University Press: 05 July 2012
A primary consideration when designing a room or system for meeting data capture is, of course, how best to capture audio of the conversation. Technology systems requiring voice input have traditionally relied on close-talking microphones for signal acquisition, as these naturally provide a higher signal-to-noise ratio (SNR) than single distant microphones. This mode of acquisition may be acceptable for applications such as dictation and single-user telephony; however, as technology moves toward more pervasive applications, less constraining solutions are required to capture natural spoken interactions.
In the context of group interactions in meeting rooms, microphone arrays (or more generally, multiple distant microphones) present an important alternative to close-talking microphones. By enabling spatial filtering of the sound field, arrays allow for location-based speech enhancement, as well as automatic localization and tracking of speakers. The primary benefit of this is to enable non-intrusive hands-free operation: that is, users are not constrained to wear headset or lapel microphones, nor do they need to speak directly into a particular fixed microphone.
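To make the idea of location-based spatial filtering concrete, the sketch below implements the simplest such technique, a delay-and-sum beamformer: each channel is time-aligned for a hypothesized source position and the channels are averaged, so that sound from that position adds coherently while sound from elsewhere is attenuated. This is an illustrative minimal example, not the processing used by any particular system described here; the function names and the use of integer-sample delays are our own simplifying assumptions (practical systems use fractional delays and adaptive designs).

```python
import numpy as np


def delay_and_sum(signals, mic_positions, source_position, fs, c=343.0):
    """Minimal delay-and-sum beamformer (illustrative sketch).

    signals: (num_mics, num_samples) array of synchronized channels.
    mic_positions: (num_mics, 3) coordinates in meters.
    source_position: (3,) hypothesized source location in meters.
    fs: sampling rate in Hz; c: speed of sound in m/s.
    """
    signals = np.asarray(signals, dtype=float)
    mic_positions = np.asarray(mic_positions, dtype=float)
    # Propagation delay from the source to each microphone (seconds).
    dists = np.linalg.norm(mic_positions - source_position, axis=1)
    delays = dists / c
    # Advance each channel so all arrivals line up with the closest mic
    # (rounded to whole samples for simplicity).
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    aligned = np.zeros_like(signals)
    for i, s in enumerate(shifts):
        if s == 0:
            aligned[i] = signals[i]
        else:
            aligned[i, :-s] = signals[i, s:]
    # Averaging reinforces the steered direction, attenuating others.
    return aligned.mean(axis=0)
```

Steering the same array output toward different candidate positions, and comparing the resulting energies, is also the basis of steered-response-power approaches to speaker localization.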
Beyond serving as a front-end speech enhancement method for automatic speech recognition, the audio localization capability of arrays offers an important cue that can be exploited in speaker diarization, joint audio-visual person tracking, and analysis of conversational dynamics, as well as in user interface elements for browsing meeting recordings.
For all these reasons, microphone arrays have become an important enabling technology over the past decade in academic and commercial research projects studying multimodal signal processing of human interactions (three example products are shown in Figure 3.1).