The central nervous system (CNS) integrates information from multiple sensory modalities, including visual and proprioceptive information, when planning a reaching movement (Jeannerod, 1988). Although visual and proprioceptive information regarding hand (or end effector) position is not always consistent, performance is typically better under reaching conditions in which both sources of information are available. Under certain task conditions, visual signals tend to dominate, such that one relies more on visual information than on proprioception to guide movement. For example, individuals reaching to a target with misaligned visual feedback of the hand, as experienced when reaching in a virtual reality environment or while wearing prism displacement goggles, adjust their movements so that the visual representation of the hand achieves the desired end point, even when the actual hand is elsewhere in the workspace (Krakauer et al., 1999, 2000; Redding and Wallace, 1996; Simani et al., 2007). This motor adaptation typically occurs rapidly, with performance returning to baseline levels within twenty trials per target, and without participants' awareness (Krakauer et al., 2000). Furthermore, participants continue to reach with these adapted movement patterns after the distortion is removed, and hence show aftereffects (Baraduc and Wolpert, 2002; Buch et al., 2003; Krakauer et al., 1999, 2000; Martin et al., 1996). These aftereffects provide a measure of motor learning referred to as visuomotor adaptation, and they arise from the CNS learning a new visuomotor mapping to guide movement.