18 - Further methods
from Part VI - See, edit, reconstruct
Published online by Cambridge University Press: 05 November 2012
Summary
An artificial neural network, or just net, may be thought of firstly in pattern recognition terms, say converting an input vector of pixel values to the character they purport to represent. More generally, a permissible input vector is mapped to the correct output by a process in some way analogous to the neural operation of the brain (Figure 18.1). In Section 18.1 we work our way up from Rosenblatt's Perceptron, with its rigorously proven limitations, to multilayer nets, which in principle can mimic any input–output function. The idea is that a net will generalise from suitable input–output examples by setting free parameters called weights.
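As a concrete illustration of the starting point of Section 18.1, the following is a minimal sketch of Rosenblatt's perceptron learning rule; it is not taken from the text, and the AND-function data, learning rate and stopping test are illustrative choices only.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Rosenblatt's rule: update the weights only on misclassified examples.
    X has shape (n_samples, n_features); labels y are in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # wrong side of the hyperplane
                w += lr * yi * xi        # nudge the hyperplane toward xi
                b += lr * yi
                errors += 1
        if errors == 0:                  # converged: data linearly separable
            break
    return w, b

# The AND function on {0,1}^2 is linearly separable, so training converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]
```

The XOR function, by contrast, is not linearly separable, which is exactly the kind of proven limitation of the single-layer perceptron that motivates the multilayer nets of Section 18.1.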
In Section 18.2 the nets are mainly self-organising, in that they construct their own categories of classification. We include learning vector quantisation and the topologically based Kohonen method. Related nets give an alternative view of Principal Component Analysis. In Section 18.3 Shannon's extension of entropy to the continuous case opens up the criterion of Linsker (1988) that neural network weights should be chosen to maximise mutual information between input and output. We include a 3D image processing example due to Becker and Hinton (1992). Then the further Shannon theory of rate distortion is applied to vector quantisation and the LBG quantiser.
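To make the quantisation thread concrete, here is a hedged sketch of LBG-style codebook design: start from the global centroid, split each codevector in two, then refine with Lloyd iterations (nearest-neighbour partition followed by a centroid update). The splitting factor, tolerance and Gaussian test data are illustrative assumptions, not the book's worked example.

```python
import numpy as np

def lbg_codebook(data, n_levels=4, eps=1e-3, max_iter=100):
    """Design a codebook of n_levels codevectors by splitting and refining."""
    codebook = data.mean(axis=0, keepdims=True)   # global centroid
    while len(codebook) < n_levels:
        # Split: perturb each codevector in two opposite directions
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        for _ in range(max_iter):
            # Voronoi partition: assign each vector to its nearest codevector
            d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            distortion = d[np.arange(len(data)), nearest].mean()
            if prev - distortion < eps * distortion:   # distortion settled
                break
            prev = distortion
            # Centroid condition: move each codevector to its cell's mean
            for k in range(len(codebook)):
                cell = data[nearest == k]
                if len(cell):
                    codebook[k] = cell.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))        # toy source: 2D Gaussian vectors
print(lbg_codebook(data, n_levels=4))   # four codevectors, one per cell
```

Rate distortion theory then supplies the benchmark: it bounds the least distortion achievable at a given codebook rate, against which a designed quantiser such as this can be judged.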
In Section 18.4 we begin with the Hough Transform and its widening possibilities for finding arbitrary shapes in an image. We end with the related idea of tomography, rebuilding an image from projections.
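For the opening idea of Section 18.4, the following is a minimal sketch of the classical Hough transform for straight lines, in which each point votes in (rho, theta) space for every line rho = x cos(theta) + y sin(theta) passing through it; the accumulator resolution and the synthetic diagonal-line test are illustrative assumptions.

```python
import numpy as np

def hough_lines(points, n_theta=180):
    """Accumulate votes in (rho, theta) space; collinear points
    pile their votes into a single accumulator cell."""
    pts = np.asarray(points, dtype=float)
    diag = int(np.ceil(np.abs(pts).max() * np.sqrt(2))) + 1  # |rho| bound
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in pts:
        # One vote per theta, at the quantised rho of the line through (x, y)
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Points on the line y = x produce one tall peak at theta = 135 degrees.
pts = [(i, i) for i in range(50)]
acc, thetas, diag = hough_lines(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(r - diag, np.degrees(thetas[t]))   # rho = 0, theta = 135.0
```

The same voting idea generalises from lines to circles and, via template parameters, to arbitrary shapes, which is the widening of possibilities the section pursues before turning to tomography.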
- Type: Chapter
- Information: Mathematics of Digital Images: Creation, Compression, Restoration, Recognition, pp. 757–831
- Publisher: Cambridge University Press
- Print publication year: 2006