Measuring a driver's level of attention and drowsiness is fundamental to reducing the number of traffic accidents, which often involve bus and truck drivers who must work for long periods under monotonous road conditions. A driver's alertness can be estimated noninvasively using computer vision techniques. However, two main difficulties must be solved to measure drowsiness robustly: first, detecting the location of the driver's face despite variations in pose and illumination; second, recognizing the driver's facial cues, such as blinks, yawns, and eyebrow raising. To overcome these challenges, our approach combines the well-known Viola–Jones face detector with motion analysis of Shi–Tomasi salient features within the face. Locating the eyes and detecting blinks is important both to refine the tracking of the driver's head and to compute the so-called PERCLOS, the percentage of time the eyes are closed over a given time interval. The latter cue is essential for noninvasive estimation of the driver's alertness, as it correlates strongly with drowsiness. To further improve eye localization under varying illumination, the proposed method exploits the high reflectivity of the retina under near-infrared illumination, employing a camera with an 850 nm wavelength filter. The paper shows that motion analysis of the salient points, in particular of cluster mass centers and spatial distributions, yields better head-tracking results than state-of-the-art methods and provides measures of the driver's alertness.
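PERCLOS itself reduces to a simple ratio over a sliding window of per-frame eye-state decisions. The following is a minimal sketch of that computation, not the paper's implementation; the function name, the window length, and the assumed frame rate are ours for illustration.

```python
from collections import deque

def perclos(eye_closed_flags):
    """Percentage of frames in the window in which the eyes are closed."""
    if not eye_closed_flags:
        return 0.0
    return 100.0 * sum(eye_closed_flags) / len(eye_closed_flags)

# Sliding window over a per-frame closed-eye signal (1 = closed, 0 = open).
# 60 frames corresponds to roughly 2 s at an assumed 30 fps; the paper's
# actual interval may differ.
window = deque(maxlen=60)
for flag in [0] * 45 + [1] * 15:  # toy signal: eyes closed in 15 of 60 frames
    window.append(flag)
print(perclos(list(window)))  # → 25.0
```

In a real system, each flag would come from the per-frame blink/eye-closure classifier, and a PERCLOS value exceeding some calibrated threshold would trigger a drowsiness alarm.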