17 - Plenoptic Cameras
from Part III - Systems and Applications
Published online by Cambridge University Press: 05 December 2015
Summary
Plenoptic cameras are hybrid imaging systems that combine a microlens array with a larger objective lens. In this chapter, we examine the optical limits of these systems and describe techniques to extend them. Before doing so, we give an overview of the history, terminology and capabilities of plenoptic cameras.
History of Light Field Capturing
Plenoptic imaging was invented multiple times during the history of photography, each time changing its name and physical form. Initially, it was called integral photography. The first system was realised with photographic film by the Franco-Luxembourgish scientist and inventor Gabriel Lippmann at the Sorbonne (Lippmann 1908). An integral camera is equipped with multiple lenses arranged side by side in a square grid. Each lens creates a unique image on the film, slightly different from that of its neighbours. As in stereoscopy, this difference is caused by the relative displacement of the lenses, an effect known as parallax.
In Lippmann's time, the production and alignment of the lenses was a manual and error-prone process. Imaging quality and light sensitivity were poor, so Lippmann and his assistants could only build lab prototypes. Because no computing technology was available to process the images, the developed film was used as a projection slide, with another array of lenses serving as the imaging optics. This projection optically superimposed the individual images into an integral image, which gives the process its name. Rather than being cast on a screen, the integral image was viewed directly, and the observer could perceive depth in the recorded scene through stereoscopy and motion parallax. Throughout the twentieth century, Lippmann's idea was rediscovered every few decades (Ives 1930, Dudnikov 1970), each time advancing theory and manufacturing technology. The current renaissance started in the 1990s and was made possible by contributions from different fields of science and engineering, each with its own terminology.
Light field is now the commonly accepted term for the light quantities recorded from a scene. It is an abstraction of the electromagnetic field that describes both the intensity and the directional distribution of light at every three-dimensional (3D) point in space, but discards polarisation and phase. The term was popularised in computer graphics by Levoy & Hanrahan (1996). Light fields were first applied as a more robust starting point for view interpolation in image-based rendering (Shum & Kang 2000).
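To make the abstraction concrete, the sketch below illustrates the two-plane parameterisation popularised by Levoy & Hanrahan, in which a sampled light field is stored as a 4D array indexed by viewpoint (s, t) and pixel (u, v), and a novel view is synthesised by interpolating between recorded views. This is a minimal illustrative sketch, not code from the chapter; the array shape, function names and the choice of simple bilinear interpolation are assumptions made here for clarity.

```python
# Minimal sketch (illustrative assumptions, not from the chapter) of a
# two-plane light field: axes (s, t) index the viewpoint and (u, v)
# the pixel position within each view.
import numpy as np

# Hypothetical sampled light field: 9x9 viewpoints, 256x256 pixels, RGB.
light_field = np.zeros((9, 9, 256, 256, 3), dtype=np.float32)

def sub_aperture_view(lf, s, t):
    """Return the image recorded from the discrete viewpoint (s, t)."""
    return lf[s, t]

def interpolate_view(lf, s, t):
    """Synthesise a view at a fractional viewpoint (s, t) by bilinear
    interpolation between the four nearest recorded views -- the
    simplest form of light-field view interpolation."""
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1, t1 = min(s0 + 1, lf.shape[0] - 1), min(t0 + 1, lf.shape[1] - 1)
    ws, wt = s - s0, t - t0
    return ((1 - ws) * (1 - wt) * lf[s0, t0] +
            ws       * (1 - wt) * lf[s1, t0] +
            (1 - ws) * wt       * lf[s0, t1] +
            ws       * wt       * lf[s1, t1])

# Example: a view midway between the four central recorded viewpoints.
novel_view = interpolate_view(light_field, 4.5, 4.5)
```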
Tunable Micro-optics, pp. 417-438. Cambridge University Press, 2015.