
6 - Matters Arising from Early Turing Tests

from PART ONE

Published online by Cambridge University Press:  12 October 2016

Kevin Warwick, Coventry University
Huma Shah, Coventry University

Summary

As we already mentioned, Hayes and Ford (1995) regard the realisation of Turing's tests as harmful to the science of AI. We contest this position: it would be a dereliction of scientific duty to avoid difficult goals or to appease the sceptical. Science should pursue innovation and advance technology for the benefit of humanity.

If realising Turing's two tests of imitation, deception and intelligence can help us ascertain what does and does not fool people, and thus improve deception detection, then it cannot be contrary to the goals of good science. This is especially so given that many researchers (including Block, Pinker and Shieber) have pointed out, and others (Colby et al., Heiser et al., Weizenbaum) have demonstrated through experiments, that even intelligent humans are gullible.

The current climate of increasing cybercrime sees opportunists turning to innovative means of defrauding people – stealing their identities, swindling funds – including text-based chatting across the Internet. So now is a very good time to engineer virtuous artificial conversationalists to counter attacks from malware such as CyberLover. In this chapter we examine some of the common arguments over the Turing test and early Turing test implementations, considering the key questions of duration, knowledge, memory and cultural bias. We begin by asking what, if anything, is actually being measured.

What is being measured?

Is it intelligence, or a type of human intelligence, that is being measured in a Turing test? Turing (1950) believed that a sustained level of satisfactory answers to any questions was sufficient to assess a machine's performance at thinking. But what, then, is thinking? To Moor (2004) it is information processing in ways which involve recognition, imagination, evaluation and decision. For Baum (2004) thought is concerned with semantics, equivalent to capturing and exploiting the compact structure of the world. Demchenko and Veselov (2008) ask whether the proven ability to think shortens the distance between machines and humankind.

These comments imply that, to succeed at providing sustained satisfactory responses in an imitation game, a machine would necessarily have to process information with the sophistication of a normal, living adult human being; that is, the machine must be a consummate actor.

Chapter 6 of Turing's Imitation Game: Conversations with the Unknown, pp. 81–96. Cambridge University Press, 2016.


References

Barnden, J.A. (2009). Challenges in natural language processing: the case of metaphor. International Journal of Speech Technology 11, 121–123.
Barnden, J.A. (2010). Metaphor and metonymy: making their connections more slippery. Cognitive Linguistics 21 (1), 1–34.
Barsegyan, A., Mackenzie, S., Kurose, B., McGaugh, J., and Roozendaal, B. (2010). Glucocorticoids in the prefrontal cortex enhance memory consolidation and impair working memory by a common neural mechanism. Proc. Nat. Acad. Sci. (USA) 107, 16655–16660.
Baum, E.B. (2004). What is Thought? MIT Press.
Block, N. (1981). Psychologism and behaviorism. Philosophical Review 90 (1), 5–43. Reprinted in The Turing Test: Verbal Behavior as the Hallmark of Intelligence, S. Shieber (ed.). MIT Press, pp. 229–266.
Colby, K.M., Weber, S., and Hilf, F.D. (1971). Artificial paranoia. Artificial Intelligence 2, 1–25.
Colby, K.M., Hilf, F.D., Weber, S., and Kraemer, H.C. (1972). Turing-like indistinguishability tests for the validation of a computer simulation of paranoid processes. Artificial Intelligence 3, 199–221.
Copple, T. (2008). Bringing AI to life: putting today's tools and resources to work. In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, R. Epstein, G. Roberts, and G. Beber (eds). Springer, pp. 359–376.
Demchenko, E. and Veselov, V. (2008). Who fools whom? The great mystification, or methodological issues on making fools of human beings. In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, R. Epstein, G. Roberts, and G. Beber (eds). Springer, pp. 447–459.
Dennett, D.C. (2004). Can machines think? In The Turing Test: Verbal Behavior as the Hallmark of Intelligence, S. Shieber (ed.). MIT Press, pp. 269–292.
Epstein, R. (2008). The quest for a thinking computer. In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, R. Epstein, G. Roberts, and G. Beber (eds). Springer, pp. 1–12.
Fagan, J.F. (2000). A theory of intelligence as processing: implications for society. Psychology, Public Policy, and Law 6 (1), 168–179.
Fagan, J.F. and Holland, C.R. (2007). Racial equality in intelligence: predictions from a theory of intelligence as processing. Intelligence 35, 319–334.
Fagan, J.F. and Holland, C.R. (2009). Culture-fair prediction of academic achievement. Intelligence 37 (1), 62–67.
French, R. (1990). Subcognition and the limits of the Turing test. Mind 99 (393), 53–65.
Genova, J. (1994). Turing's sexual guessing game. Social Epistemology 8, 313–326.
Goodenough, W.H. (1957). Cultural anthropology and linguistics. In Report of the Seventh Annual Round Table Meeting on Linguistics and Language Study, P.L. Garvin (ed.). Georgetown University Press, pp. 167–173.
Hayes, P. and Ford, K. (1995). Turing test considered harmful. In Proc. 14th Int. Joint Conf. on Artificial Intelligence, Vol. 1, Montreal, August 20–25, pp. 972–977.
Heiser, J.F., Colby, K.M., Faught, W.S., and Parkison, R.C. (1979). Can psychiatrists distinguish a computer simulation of paranoia from the real thing? The limitations of Turing-like tests as measures of the adequacy of simulations. J. Psychiatric Research 15 (3), 149–162.
Lakoff, G. (1994). What is metaphor? In Analogy, Metaphor and Reminding, J.A. Barnden and K.J. Holyoak (eds). Advances in Connectionist and Neural Computation Theory. Intellect Books.
Loebner, H.G. (2010). Some misconceptions regarding the Turing test. In Towards a Comprehensive Intelligence Test (TCIT): Proc. AISB 2010 Symposium, De Montfort University, pp. 50–51.
Moor, J.H. (2004). An analysis of the Turing test. In The Turing Test: Verbal Behavior as the Hallmark of Intelligence, S. Shieber (ed.). MIT Press, pp. 297–306.
Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press.
Pinker, S. (1997). How the Mind Works. Penguin.
Purtill, R.L. (1971). Beating the imitation game. Mind 80 (318), 290–294.
Sacktor, T. (2011). How does PKMζ maintain long-term memory? Nature Reviews Neuroscience 12, 9–15.
Savova, V. and Peshkin, L. (2007). Is the Turing test good enough? The fallacy of resource-unbounded intelligence. In Proc. 20th Int. Joint Conf. on Artificial Intelligence (IJCAI-07), Hyderabad, pp. 545–550.
Shah, H. (2006). Chatterbox Challenge 2005: geography of the modern Eliza. In Proc. 3rd Natural Language Understanding and Cognitive Science (NLUCS) Workshop, ICEIS, Cyprus, pp. 133–138.
Shah, H. (2010). Deception-detection and machine intelligence in practical Turing tests. PhD thesis, University of Reading.
Shieber, S.M. (2008). The Turing test as interactive proof. Noûs 41 (4), 686–713.
Soeter, M. and Kindt, M. (2011). Disrupting reconsolidation: pharmacological and behavioral manipulations. Learning and Memory 18, 357–366.
Sterrett, S.G. (2003). Turing's two tests for intelligence. In The Turing Test: The Elusive Standard of Artificial Intelligence, J.H. Moor (ed.). Kluwer, pp. 79–97.
Swanson, S.A. (2010). Memory and forgetting: piecing together the molecular puzzle of memory storage. The 2010 Progress Report on Brain Science. The Dana Foundation. http://www.dana.org/news/publications/detail.aspx?id=24570.
Tsien, J., Li, M., Osan, R., Chen, G., Lin, L., Wang, P., Frey, S., Frey, J., Zhu, D., Liu, T., Zhao, F., and Kuang, H. (2013). On initial brain activity mapping of episodic and semantic memory code in the hippocampus. Neurobiology of Learning and Memory 105, 200–210.
Turing, A.M. (1950). Computing machinery and intelligence. Mind LIX (236), 433–460.
Wardhaugh, R. (1996). An Introduction to Sociolinguistics, 2nd edition. Blackwell.
Warwick, K. and Shah, H. (2014). Assumption of knowledge and the Chinese Room in Turing test interrogation. AI Communications 27 (3), 275–283.
Warwick, K., Shah, H., and Moor, J.H. (2013). Some implications of a sample of practical Turing tests. Minds and Machines 23, 163–177.
