Introduction: Kuhnian paradigms in AI
Is it helpful or revealing to see the state of AI in perhaps over-fashionable Kuhnian terms (Kuhn, 1962)? In the Kuhnian view, scientific progress comes from social crisis: there are pre-paradigm sciences struggling to reach the state of “normal science,” in which routine experiments are done within an overarching theory that satisfies its adherents, without daily worry about the adequacy of that theory.
At the same time, there will be other sciences whose theory is under threat, pressured either by disconfirming instances or by fundamental doubts about its foundations. In these situations, normal science can continue only if the minds of the theory's adherents remain closed to possible falsification, until some irresistible falsifying circumstance arises, whether by accretion of anomalies or by the discovery of a phenomenon that can no longer be ignored.
There is much that is circular in this (the notion of “irresistible,” for example), and there may be doubts as to whether AI is fundamentally science or engineering (we return to this below). But we may assume, for simplicity, that even if AI were engineering, similar social descriptions of its progress might still apply (see Duffy, 1984).
Does AI show any of the signs of normality or crisis that would place it under one of these Kuhnian descriptions, and what would follow if it did? It is easy to find normality: the routine production of certain kinds of elementary expert systems (ESs) within commercial software houses and other companies.