Introduction
Detecting whether the video of a speaking person in frontal head pose corresponds to the accompanying audio track is of interest in numerous multimodal biometrics-related applications. In many practical situations, the audio and visual modalities may not be in sync: for example, we may observe static faces in images, the camera may be focusing on a nonspeaker, or a subject may be speaking in a foreign language with the audio dubbed into another language. Spoofing attacks on audiovisual biometric systems also often involve audio and visual data streams that are not in sync. Audiovisual (AV) synchrony indicates consistency between the audio and visual streams, and thus the likelihood that the corresponding segments belong to the same individual. Such segments could then serve as building blocks for generating bimodal fingerprints of the different individuals present in the AV data, which can be important for security, authentication, and biometric purposes. AV segmentation can also be important for speaker turn detection, as well as for automatic indexing and retrieval of the different occurrences of a speaker.
The problem of AV synchrony detection has already been considered in the literature. We refer to Bredin and Chollet (2007) for a comprehensive review of this topic; the authors present a detailed discussion of different aspects of AV synchrony detection, including feature processing, dimensionality reduction, and correspondence detection measures. In that paper, AV synchrony detection is applied to the problem of identity verification, but the authors also mention additional applications in sound source localization, AV sequence indexing, film postproduction, and speech separation.