
10 - Face Tracking and Recognition in a Camera Network

from PART III - HYBRID BIOMETRIC SYSTEMS

Published online by Cambridge University Press: 25 October 2011

Ming Du, University of Maryland
Aswin C. Sankaranarayanan, Rice University
Rama Chellappa, University of Maryland
Bir Bhanu, University of California, Riverside
Venu Govindaraju, State University of New York, Buffalo

Summary

Introduction

Multicamera networks are becoming increasingly common in surveillance applications because of their ability to provide persistent sensing over a large area. This makes them well suited to opportunistic sensing and nonintrusive acquisition of biometrics, which are useful in many applications. However, opportunistic sensing invariably comes at a price: a wide range of potential nuisance factors that alter and degrade the biometric signatures of interest. Typical nuisance factors include pose, illumination, defocus blur, motion blur, occlusion, and weather effects. Having multiple views of a person is critical for mitigating some of these degradations.

In particular, having multiple viewpoints helps build more robust signatures because the system has access to more information. For face recognition, having multiple views increases the chance that the person appears in a favorable frontal pose. However, to use the multiview information reliably, we need to estimate the pose of the person's head. This can be done explicitly, by computing the actual pose of the person to a reasonable approximation, or implicitly, by using a view-selection algorithm. Estimating the pose of a person's head is a difficult problem, especially when the images have poor resolution and the camera calibration (both extrinsic and intrinsic) is not precise enough to allow robust multiview fusion. This is especially true in surveillance applications, where the subjects under observation often appear in the far field of the camera.
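To make the implicit route concrete, the following is a minimal sketch of view selection, assuming that a separate head-pose estimator has already produced per-camera yaw and pitch estimates. The CameraObservation type, the frontalness_score function, and the example camera names are hypothetical illustrations, not the chapter's actual algorithm.

import math
from dataclasses import dataclass

@dataclass
class CameraObservation:
    """Hypothetical per-camera output of a head-pose estimator (angles in degrees)."""
    camera_id: str
    yaw_deg: float    # rotation of the head about the vertical axis
    pitch_deg: float  # up/down tilt of the head

def frontalness_score(obs: CameraObservation) -> float:
    """Score a view by how close the head pose is to frontal (yaw = pitch = 0).

    Larger is better; a perfectly frontal face scores 1.0.
    """
    deviation = math.hypot(obs.yaw_deg, obs.pitch_deg)
    return 1.0 / (1.0 + deviation)

def select_best_view(observations: list[CameraObservation]) -> CameraObservation:
    """Implicit view selection: pick the camera that sees the most frontal face."""
    return max(observations, key=frontalness_score)

if __name__ == "__main__":
    views = [
        CameraObservation("cam_north", yaw_deg=55.0, pitch_deg=5.0),
        CameraObservation("cam_east", yaw_deg=12.0, pitch_deg=-3.0),
        CameraObservation("cam_south", yaw_deg=-80.0, pitch_deg=10.0),
    ]
    best = select_best_view(views)
    print(f"Selected {best.camera_id} (yaw={best.yaw_deg}, pitch={best.pitch_deg})")

In practice the score could also weight face resolution, blur, or detector confidence; the point of the sketch is simply that view selection sidesteps fusing poorly calibrated views by committing to the single most informative one per frame.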

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011

