Introduction
It is probably fair to say that we have not, to this day, formed a clear picture of the learning process; nor have we been able to elicit from artificial intelligence machines behavior that compares in flexibility and performance with that exhibited by human, or even animal, subjects.
Leaving aside the issue of what actually happens in a learning brain, research on how to generate ‘intelligent’ behavior has oscillated between two poles. The first, which today predominates in artificial intelligence circles (Nilsson, 1980), takes it for granted that solving a particular problem entails the repeated application, to a data set representing the starting condition, of operations chosen from a predefined set; the order of application may be either arbitrary or determined heuristically. The task is completed when the data set is found to be in a ‘goal’ state. This approach can be said to ascribe to the system, ‘from birth’, the capabilities required for a successful solution. The second approach, quite popular in its early version (Samuel, 1959), has recently been favored by physicists (Hopfield, 1982; Hogg & Huberman, 1985). It rests on the idea that ‘learning machines’ should be endowed not with specific capabilities but with a general architecture, together with a set of rules used to modify the machines' internal states so that progressively better performance is obtained upon presentation of successive sample tasks.
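The contrast between the two poles can be made concrete with a small sketch. Everything below (the function names, the toy arithmetic task, and the perceptron-style learner) is invented for illustration and is not drawn from the cited works: the first approach appears as breadth-first application of a fixed operator set until a goal state is reached, the second as an error-driven update rule whose performance improves as sample tasks are presented.

```python
from collections import deque

# --- First pole: repeated application of predefined operators -------------
# The "data set" is a state; each operator maps a state to a new state; the
# task ends when a state satisfying the goal test is found.
def solve_by_search(start, operators, is_goal):
    """Breadth-first application of operators until a goal state appears."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path          # sequence of operator names applied
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy task: reach 10 from 1 using the operators "+3" and "*2".
ops = [("+3", lambda s: s + 3), ("*2", lambda s: s * 2)]
print(solve_by_search(1, ops, lambda s: s == 10))  # → ['+3', '+3', '+3']

# --- Second pole: a general architecture plus a modification rule ---------
# A linear threshold unit with a perceptron-style update: no task-specific
# capability is built in; repeated presentation of samples adjusts the
# internal state (weights) toward better performance.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                 # error drives the update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Sample tasks: the logical AND of two inputs.
AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_SAMPLES)
```

The search procedure succeeds only because the operators it needs were supplied ‘from birth’; the learner starts with no relevant capability and acquires one through the update rule alone.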