Published online by Cambridge University Press: 20 May 2010
Imagine entering a room for the very first time (Fig. 26.1) and then being asked to look around and to see what is in it. The first glance already tells you what kind of room it is (in our case it clearly is a scene set in a museum); immediately afterwards you begin to notice objects in the room (the prominent statue in the foreground, several other statues on pedestals, paintings on the walls, etc.). Your attention might be drawn more towards certain objects first and then wander around taking in the remaining objects. Very rarely will your visual system need to pause and take more time to investigate what a particular object is – even more rarely will it not be able to interpret it at all (perhaps the four-horned statue in the back of the room will be confusing at first, but it still can be interpreted as a type of four-legged, hoofed animal). The remarkable ability of the (human) visual system to quickly and robustly assign labels to objects (and events) is called categorization.
The question of how we learn to categorize objects and events has been at the heart of cognitive science and neuroscience research for the past few decades. At the same time, advances in the field of computational vision – both in terms of the algorithms involved and in the capabilities of today's computers – have made it possible to begin to examine how computers might solve the difficult problem of categorization. In this chapter, we will therefore address some of the key challenges in categorization from a combined cognitive and computational perspective.