In 1950, as a small part of his seminal article “Computing Machinery and Intelligence”, the great British mathematician, logician, computer pioneer, artificial-intelligence founder, and philosopher Alan Mathison Turing wrote out two tiny but marvelously thought-provoking snippets of a hypothetical human-machine dialogue illustrating what he termed the “imitation game” (and which later became known as the Turing Test). Turing presumably believed that those snippets were sufficiently complex to suggest to an average reader all of the fantastic machinery that underlies our full human use of language. To me, Turing's snippets indeed had that effect, but the sad thing is that many people in the ensuing decades read those short imaginary teletype-mediated dialogues and mistakenly thought that very simplistic mechanisms could do all that was shown there. From this they drew the conclusion that even if some AI program passed the full Turing Test, it might still be nothing but a patchwork of simple-minded tricks, as lacking in understanding or semantics as is a cash register or an automobile transmission. It is amazing to me that anyone could draw such a preposterous conclusion, but the fact is, it is a very popular view.
Innumerable arcane philosophical debates have been inspired by Turing's proposed Test, and yet I have never seen anyone descend to the mundane, concrete level that Turing himself did, and really spell out an example of what genuine human-level machine intelligence might look like on the screen. I think concrete examples are always needed before arcane arguments are bandied about. And therefore, my attempted “favor” for Alan Mathison Turing consists in having worked out a much longer hypothetical dialogue between a human and a machine, a dialogue that I hope is quite in the spirit of Turing's two little snippets, but that is intended to make far more explicit than he did the degree of complexity and depth that must exist behind the linguistic façade.
As you read the dialogue that follows, it would be good to keep in mind the tacit implication by anti-AI philosopher John Searle in his writings featuring the so-called “Chinese Room” thought-experiment that computers that deal with language — even ones that might someday pass the Turing Test in its full glory, which of course the following hypothetical program certainly would seem to be able to do — necessarily stay stuck on language's surface, solely playing syntactic games,[…]