Because teleoperation tasks are complex, a human operator remains in the teleoperator's perception-decision-control loop. The operator therefore needs an interactive system to handle the large flow of data exchanged with the teleoperator.
The scene, consisting of the robot and its environment, is viewed by one or more cameras. However, the video image may be degraded in extreme environments (underwater, space, etc.) or simply inadequate (a 2-D image).
In this paper we describe the visual perception aids based on the scene, and more specifically how they are generated by the method we put forward. The system developed at the LRE superimposes a 3-D synthetic image onto the video picture and animates the scene in real time from sensor feedback. The graphic image can be generated from models when the objects are known, or interactively, with the operator's cooperation, when the objects are completely unknown. Experiments show that these graphic aids improve the operator's performance in task execution.
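The core of such an overlay is the projection of a known 3-D object model into the camera's image plane, re-computed each time sensor feedback updates the object's pose. The following is a minimal sketch of that idea; the pinhole-camera parameters, the cube model, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import math

def project_point(p, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame point (x, y, z), z > 0.
    focal, cx, cy are assumed intrinsic parameters of the camera."""
    x, y, z = p
    return (cx + focal * x / z, cy + focal * y / z)

def apply_pose(p, yaw, t):
    """Rotate a model point about the vertical axis and translate it,
    e.g. from a joint-angle reading (yaw) and a position estimate (t)."""
    x, y, z = p
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z + t[0], y + t[1], -s * x + c * z + t[2])

# Unit-cube vertices standing in for a known object's wireframe model.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Each sensor update yields a new pose; re-projecting the model
# animates the synthetic image over the video picture in real time.
pose = {"yaw": 0.1, "t": (0.0, 0.0, 4.0)}
overlay = [project_point(apply_pose(p, pose["yaw"], pose["t"]))
           for p in cube]
```

In a real system the 2-D points in `overlay` would be drawn as a wireframe on top of the live video frame, and the loop would repeat at the sensor update rate.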