The aim of this study was to explore the effect of rater training on interrater reliability, internal consistency, standard error of measurement, and rater familiarity for the Melbourne Assessment of Unilateral Upper Limb Function. Twenty-four participants (raters) were randomly assigned to either a ‘trained’ (n=12) or an ‘untrained’ (n=12) condition; both groups then scored the same nine video recordings of children completing the instrument. The children (six males, three females) ranged in age from 5 years 5 months to 12 years; all had spastic cerebral palsy (five with quadriparesis, four with hemiparesis), and their Gross Motor Function Classification System levels were I (n=3), II (n=3), III (n=1), and IV (n=2). All raters were novice occupational therapists with no previous experience of using the instrument. A significant between-group difference in perceived test familiarity was found after scoring but not before, with trained raters reporting greater familiarity. A significant difference was also found in total scores across all cases and in eight of the 16 individual item raw scores; again, trained raters scored higher. Interrater reliability was high in both groups, except for item 6 in the untrained group. Internal consistency was high in both groups, except for items 6 and 9 in the untrained group. We conclude that training novice users increases familiarity and leads raters to perceive higher levels of performance on some items. The Melbourne Assessment shows high reliability even for novice users.