Learning by imitation is not a unitary task; understanding its various levels well enough to foster imitative learning is therefore daunting, whether the subjects are human, animal or inanimate. For some syndromes arising in childhood (e.g. autism), a lack of imitative ability (i.e. excluding rote, meaningless, often involuntary mimetic behavior such as echolalia) is a defining characteristic (Hobson and Lee, 1999; Williams et al., 2001). What constitutes imitation in animals is still under study (Hurley and Chater, 2005; Nehaniv and Dautenhahn, this volume). And computers (and, by inference, robots using computer intelligence) may be “smart” in terms of brute processing power, but their learning is limited to what is easily programmed. Most computers and robots are presently analogous to living systems trained in relatively simple conditioned stimulus-response paradigms: given specific input parameters (which can, of course, be numerous and diverse), they quickly and efficiently produce a predetermined, correct output (Pepperberg, 2001); in general, however, computers can solve only those new problems that resemble ones they have already been programmed to solve. And although connectionist models are making significant advances, allowing generalization beyond training sets for a number of individual problems (e.g. Schlesinger and Parisi, 2004), at present most computational mechanisms – and autistic children – cannot learn in the ways normal humans do. No one model, for example, does all of the following: form new abstract representations, manipulate those representations to attain concrete goals, transfer information acquired in one domain to manage tasks in another, and integrate new and existing disparate forms of knowledge (e.g. linguistic, contextual, emotional) to solve novel problems – nor can any one model achieve these behavior patterns through imitation in the way every normal young child does (see Lieberman, 2001).
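The stimulus-response analogy above can be made concrete with a toy sketch (purely illustrative; the stimuli, responses, and function names here are invented for the example and do not come from the chapter). A system of this kind maps trained inputs to predetermined outputs but has no strategy at all for a stimulus it was never programmed to handle:

```python
# Toy conditioned stimulus-response system: known stimuli map to
# predetermined responses; anything novel simply fails.
TRAINED_RESPONSES = {
    "red light": "stop",
    "green light": "go",
}

def respond(stimulus: str) -> str:
    """Return the trained response for a stimulus, or fail on novelty."""
    if stimulus in TRAINED_RESPONSES:
        # Fast and reliable, but only because the answer was pre-programmed.
        return TRAINED_RESPONSES[stimulus]
    # No mechanism for abstraction, transfer, or imitation-based learning.
    raise ValueError(f"no trained response for {stimulus!r}")
```

So `respond("red light")` succeeds immediately, while `respond("blue light")` raises an error: the system cannot generalize beyond its training, which is the limitation the connectionist models mentioned next are beginning to address.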