Visual categorization of objects has captured the attention of the vision community for decades (Dickinson 2008). The increased popularity of the problem in recent years and the advent of powerful computer hardware have led to the seeming success of categorization approaches on standard datasets such as Caltech-101 (Fei-Fei et al. 2004). However, the large discrepancy between the accuracy of object classification and that of detection/segmentation (Everingham et al. 2007) suggests that the problem still poses a significant and open challenge. The recent preoccupation with tuning approaches to specific datasets might have diverted attention from the most crucial issue: the representation (Edelman and Intrator 2004).
This chapter focuses on what we believe are two central design principles for a hierarchical organization of categorical representations: hierarchical compositionality and statistical, bottom-up learning.
Given images of complex scenes, objects must be inferred from the pixel information through some recognition process. This requires an efficient and robust matching of the internal object representation against the representation produced from the scene. Despite the seemingly effortless performance of human perception, the diversity and the sheer number of visual object classes, appearing at various scales, 3-D views, and articulations, pose a great obstacle to the task. In fact, it has been shown by Tsotsos (1990) that unbounded visual search is NP-complete; thus, approximate, hierarchical solutions might be the most promising or plausible way to tackle the problem. This line of architecture is also consistent with findings on biological systems (Rolls and Deco 2002; Connor et al. 2007).