Introduction
The detection of image regions and their borders is one of the basic requirements for further (object-domain) image processing in a general-purpose technical pattern recognition system and, very likely, in the visual system as well. It is a prerequisite for object separation (figure-ground discrimination, and the separation of adjoining and intersecting objects), which in turn is necessary for the generation of invariances for object recognition (Reitboeck & Altmann, 1984).
Texture is a powerful feature for region definition. Objects and background usually have different textures; camouflage works by breaking this rule. For texture characterization, Fourier (power) spectra are frequently used in computer pattern recognition. Although the signal transfer properties of visual channels can be described in the spatial (and temporal) frequency domain, there is no conclusive evidence that pattern processing in the primary visual areas is carried out in terms of local Fourier spectra.
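As a hedged illustration of the computer-vision practice mentioned above (not of the model proposed in this paper), the following sketch characterizes two synthetic textures by their radially averaged Fourier power spectra; the texture patterns and all function names are hypothetical choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_spectrum(patch):
    """Radially averaged power spectrum of a 2-D image patch."""
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2
    n = patch.shape[0]
    y, x = np.indices(patch.shape)
    r = np.hypot(y - n // 2, x - n // 2).astype(int)
    # Average power within each integer-radius spatial-frequency band.
    return np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())

# Two synthetic 64 x 64 textures: coarse (4 x 4 blocks of noise) vs. fine noise.
coarse = rng.normal(size=(16, 16)).repeat(4, axis=0).repeat(4, axis=1)
fine = rng.normal(size=(64, 64))

ps_coarse = power_spectrum(coarse)
ps_fine = power_spectrum(fine)

# The coarse texture concentrates its power at low spatial frequencies,
# while the fine texture spreads it toward high frequencies, so the two
# spectra separate the textures.
frac_coarse = ps_coarse[1:8].sum() / ps_coarse[1:].sum()
frac_fine = ps_fine[1:8].sum() / ps_fine[1:].sum()
print(frac_coarse, frac_fine)
```

The low-frequency power fraction differs sharply between the two textures, which is the property that makes power spectra usable as texture descriptors.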
In the following we propose a model for texture characterization in the visual system, based on region labeling in the time domain via correlated neural events. The model is consistent with several basic operational principles of the visual system, and its texture separation capacity is in very good agreement with the pre-attentive texture separation of humans.
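The idea of region labeling via correlated neural events can be sketched in a minimal toy form. Assuming (as an illustration, not as the paper's implementation) that units belonging to the same texture region carry oscillatory activity with a common phase, region membership can be read off the pairwise correlation of their activity traces:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 1, 0.002)  # 500 time steps over 1 s

def unit_activity(phase):
    """Noisy 40 Hz oscillatory firing-rate trace with a region-specific phase."""
    return np.sin(2 * np.pi * 40 * t + phase) + 0.3 * rng.normal(size=t.size)

# Hypothetical population: units 0-3 cover the figure region, units 4-7 the
# ground region; each region oscillates with its own phase.
phases = [0.0] * 4 + [np.pi / 2] * 4
traces = np.array([unit_activity(p) for p in phases])

# Units within a region are strongly correlated; units across regions are not.
corr = np.corrcoef(traces)

# Label by correlation with a seed unit: correlated units share its region.
labels = (corr[0] > 0.5).astype(int)
print(labels)  # units 0-3 grouped together, separated from units 4-7
```

The point of the sketch is only that a temporal tag (here, a shared phase) turns region membership into a correlation measurement, without requiring any spatial labeling of the units themselves.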
Texture region definition via temporal correlations
When we look at a scene, we can, in effect, generate a ‘matched filter’ and use it to direct our attention to a specific object region.