This chapter describes agents, based on the ACT-R cognitive architecture, that operate in real robotic and virtual synthetic domains. From the agent modeling perspective, the virtual and robotic task domains discussed here share nearly identical challenges. Most importantly, both involve agents that interact with humans and with each other in real time in a three-dimensional space. The chapter presents a unified approach to developing ACT-R agents for these environments, one that takes advantage of the synergies between them.
In both domains, agents must be able to perceive the space they move through (i.e., architecture, terrain, obstacles, objects, vehicles, etc.). In some cases the information available from perception is raw sensor data, whereas in other cases it is at a much higher level of abstraction. Similarly, in both domains actions can be specified and implemented at a very low level (e.g., through the movement of individual actuators or simulated limbs) or at a much higher level of abstraction (e.g., moving to a particular location, which in turn depends on lower-level actions).
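The two levels of action abstraction described above can be sketched as follows. This is a hypothetical illustration, not part of ACT-R or any particular robot API: a high-level "move to location" action that decomposes into a sequence of low-level per-step actuator commands.

```python
# Hypothetical sketch of action abstraction levels: a high-level command
# ("move to a location") implemented on top of repeated low-level steps.
# All names here are illustrative assumptions, not an actual agent API.

def low_level_step(position, target, step_size=1.0):
    """Low-level action: move one bounded increment toward the target."""
    def clamp(delta):
        return max(-step_size, min(step_size, delta))
    return tuple(p + clamp(t - p) for p, t in zip(position, target))

def move_to(position, target, step_size=1.0, max_steps=100):
    """High-level action: issue low-level steps until the target is reached."""
    path = [position]
    for _ in range(max_steps):
        if position == target:
            break
        position = low_level_step(position, target, step_size)
        path.append(position)
    return path

# The high-level caller never manipulates individual increments directly.
path = move_to((0.0, 0.0), (3.0, 1.0))
```

An agent model can then reason at the level of `move_to` while the decomposition into actuator-level steps remains hidden, which is exactly the separation of abstraction levels the text describes.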
Controlling programs for both robots and synthetic agents must operate on some representation of the external environment that is created through the processing of sensory input. Thus, the internal robotic representation of the external world is in effect a simulated virtual environment. Many of the problems in robotics then hinge on being able to create from sensor data an internal representation of the world that is sufficiently rich and abstract to capture the nuances necessary for proper perception (e.g., perceiving a rock rather than a thousand individual pixels from a camera sensor bitmap), together with a sufficiently abstract representation of actions to allow the agent to act properly.
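The rock-versus-pixels example above can be made concrete with a minimal sketch of perceptual abstraction. The function below, an illustrative assumption rather than any method from the chapter, groups adjacent above-threshold pixels in a bitmap into object-level percepts (bounding boxes), so the agent receives a handful of objects instead of a thousand individual pixels.

```python
# Illustrative sketch (not from the chapter): abstracting a raw sensor
# bitmap into object-level percepts by flood-filling 4-connected regions
# of above-threshold pixels and summarizing each as a bounding box.

def extract_objects(bitmap, threshold=0):
    """Return (top, left, bottom, right) boxes for connected bright regions."""
    rows, cols = len(bitmap), len(bitmap[0])
    seen = set()
    objects = []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] > threshold and (r, c) not in seen:
                # Flood-fill one connected region of bright pixels.
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                ys = [y for y, _ in region]
                xs = [x for _, x in region]
                objects.append((min(ys), min(xs), max(ys), max(xs)))
    return objects

bitmap = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
]
# Yields one box per connected bright region (two regions here).
print(extract_objects(bitmap))
```

Real perceptual pipelines are far more involved, but the principle is the same: the agent's internal world model holds the abstracted objects, not the raw sensor values.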