Book contents
- Frontmatter
- Contents
- Introduction
- 1 Turing the Man
- PART ONE
- PART TWO
- Introduction to Part Two
- 7 The 2008 Reading University Turing Tests
- 8 2012 Tests – Bletchley Park
- 9 Interviews with Elite Machine Developers
- 10 Turing2014: Tests at The Royal Society, June 2014
- 11 The Reaction to Turing2014
- Index
- References
8 - 2012 Tests – Bletchley Park
from PART TWO
Published online by Cambridge University Press: 12 October 2016
Summary
Between the end of the October 2008 experiment at Reading University and a special event at Bletchley Park in June 2012, an exciting and historic development took place in the continuing man-versus-machine narrative.
IBM once again produced a machine that beat human champions at their own game, following Deep Blue's defeat of Garry Kasparov.
Back in the late 1990s, the analysis of Deep Blue's performance was that it used brute force to look ahead through millions of chess moves, but that it lacked intelligence. Recall that Turing (1948) had stated that “research into intelligence of machinery will probably be very greatly concerned with searches”. Is ‘search’ not part of our daily decision-making, even if done in an instant, to decide what the next best move is, no matter what activities we are planning?
In February 2011 the Watson system (Ferrucci et al., 2010), named after IBM's founder Thomas J. Watson, was seen on TV in the US and across the Internet playing a game that involved identifying the correct answer to a clue. With this ‘super’ machine (see Figure 8.1), rather than have a machine compete with a human in a chess match, IBM chose a contest featuring natural language: the American general knowledge quiz show Jeopardy! (Baker, 2011).
The IBM team conceded that this was a formidable challenge:
Understanding natural language, what we humans use to communicate with one another every day, is a notoriously difficult challenge for computers. Language to us is simple and intuitive and ambiguity is often a source of humor and not frustration.
In designing the Watson system around a deep-search question–answer strategy, the IBM team were fully aware that:
As we humans process language, we pare down alternatives using our incredible abilities to reason based on our knowledge. We also use any context couching the language to further promote certain understandings. These things allow us to deal with the implicit, highly contextual, ambiguous and often imprecise nature of language.
The machine successfully challenged two Jeopardy! masters, Ken Jennings and Brad Rutter, in a Final Jeopardy! general knowledge, human-versus-machine exhibition contest.
- Type: Chapter
- Information: Turing's Imitation Game: Conversations with the Unknown, pp. 128–158. Publisher: Cambridge University Press. Print publication year: 2016