
Responsibility and Decision Making in the Era of Neural Networks*

Published online by Cambridge University Press:  13 January 2009

William Bechtel
Affiliation:
Philosophy, Neuroscience, and Psychology, Washington University in St. Louis

Extract

Many of the mathematicians and scientists who guided the development of digital computers in the late 1940s, such as Alan Turing and John von Neumann, saw these new devices not just as tools for calculation but as devices that might employ the same principles as are exhibited in rational human thought. Thus, a subfield of what came to be called computer science assumed the label artificial intelligence (AI). The idea of building artificial systems which could exhibit intelligent behavior comparable to that of humans (which could, e.g., recognize objects, solve problems, formulate and implement plans, etc.) was a heady prospect, and the claims made on behalf of AI during the 1950s and 1960s were impossibly ambitious (e.g., having a computer capture the world chess championship within a decade). Despite some theoretical and applied successes within the field, serious problems soon became evident (of which the most notorious is the frame problem, which involves the difficulty in determining which information about the environment must be changed and which must be kept constant in the face of new information). Instead of fulfilling the goal of quickly producing artificial intelligent agents which could compete with or outperform human beings, by the 1970s and 1980s AI had settled into a pattern of slower but real progress in modeling or simulating aspects of human intelligence. (Examples of the advances made during this period were the development of higher-level structures for encoding information, such as frames or scripts, which were superior to simple propositional encodings in supporting reasoning or the understanding of natural [as opposed to computer or other artificial] language texts, and the development of procedures for storing information about previously encountered cases and invoking these cases in solving new problems.)
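
The last of these advances, case-based reasoning, can be pictured with a minimal sketch: store previously solved cases and reuse the solution of the stored case most similar to a new problem. The case library, features, and similarity measure below are illustrative assumptions, not a reconstruction of any system the article discusses.

```python
# Minimal sketch of case-based retrieval: store solved cases as feature
# dictionaries and reuse the solution of the most similar stored case.
# The case library, features, and similarity measure are illustrative
# assumptions only.

case_library = [
    {"features": {"symptom": "overheating", "load": "high"}, "solution": "add cooling"},
    {"features": {"symptom": "overheating", "load": "low"},  "solution": "check sensor"},
    {"features": {"symptom": "no power",    "load": "high"}, "solution": "replace supply"},
]

def similarity(features_a, features_b):
    # Count matching attribute-value pairs (a crude similarity measure).
    return sum(1 for k, v in features_a.items() if features_b.get(k) == v)

def solve(new_problem):
    # Invoke the stored case most similar to the new problem.
    best_case = max(case_library, key=lambda c: similarity(new_problem, c["features"]))
    return best_case["solution"]

print(solve({"symptom": "overheating", "load": "high"}))   # -> "add cooling"
```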

Type
Research Article
Copyright
Copyright © Social Philosophy and Policy Foundation 1996


References

1 The term “artificial intelligence” was apparently invented by John McCarthy in the context of a seminal conference at Dartmouth College in 1956. For accounts of the early history of AI, see McCorduck, Pamela, Machines Who Think (San Francisco: W. H. Freeman, 1979); and Gardner, Howard, The Mind's New Science: A History of the Cognitive Revolution (New York: Basic Books, 1985).

2 McCarthy, John and Hayes, Patrick J., “Some Philosophical Problems from the Standpoint of Artificial Intelligence,” in Machine Intelligence, ed. Meltzer, B. and Michie, D. (Edinburgh: Edinburgh University Press, 1969), pp. 463–502. For a fairly recent review of work on the frame problem, see Ford, K. M. and Hayes, P. J., eds., Reasoning Agents in a Dynamic World: The Frame Problem (London: JAI Press, 1991).

3 Schank, Roger C. and Abelson, Robert P., Scripts, Plans, Goals, and Understanding (Hillsdale, NJ: Lawrence Erlbaum, 1977); and Minsky, Marvin, “A Framework for Representing Knowledge,” in The Psychology of Computer Vision, ed. Winston, P. H. (New York: McGraw-Hill, 1975).

4 Schank, Roger C. and Riesbeck, Christopher K., Inside Case-based Reasoning (Hillsdale, NJ: Lawrence Erlbaum, 1989).

5 Buchanan, Bruce G. and Shortliffe, Edward H., eds., Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Reading, MA: Addison-Wesley, 1984).

6 One may question whether turning off an AI system really constitutes punishment. This depends in part upon whether it makes sense to construe AI systems as having interests. Designing systems that have something recognizable as interests is now an active area of AI research, especially in the field of artificial life. Exploring this topic, however, is beyond the scope of this essay. For now I will simply assume that AI research will continue to be successful in developing artificial agents that resemble humans in their decision making, including possession of motivational states.

7 Larry May, in personal discussion, has proposed an alternative perspective. When a human agent becomes intoxicated and begins to behave in irresponsible ways that are difficult to predict, we do not absolve the agent, but hold him or her responsible for becoming intoxicated. Perhaps we should similarly hold responsible AI designers who devise agents that they know will behave in ways they cannot foresee. The context of constructing AI systems is, I think, significantly different from the context of becoming intoxicated. The systems are created because it is anticipated that they will generate many good outcomes (solving problems better than humans, etc.). The better analogy, I contend, is with giving birth to children. One hopes that one's children will be agents of good, even if, in ways that are currently unpredictable, some will cause great harm. We do not hold parents morally responsible for the actions of their (adult) offspring if they have done their best to provide an appropriate upbringing; neither should we hold AI designers responsible for the artificial agents they create if they have taken due precautions.

8 See Holland, John H., Holyoak, Keith J., Nisbett, Richard E., and Thagard, Paul R., Induction: Processes of Inference, Learning, and Discovery (Cambridge, MA: MIT Press, 1986).

9 During the 1950s and 1960s, a number of researchers actively pursued the alternative connectionist or neural network strategy for creating artificial intelligent agents. See Rosenblatt, Frank, The Principles of Neurodynamics (New York: Spartan, 1962); and Selfridge, Oliver G. and Neisser, Ulric, “Pattern Recognition by Machine,” Scientific American, vol. 203 (August 1960), pp. 60–68. A critical analysis of the limitations of such systems by Minsky, Marvin and Papert, Seymour, Perceptrons (Cambridge, MA: MIT Press, 1969), helped to reduce interest in this approach. For a humorous recounting of the fall and reemergence of the neural network alternative, see Papert, Seymour, “One AI or Many?” Daedalus, vol. 117 (1988), pp. 1–14.

10 Perhaps the seminal event in the reemergence of such models was the publication of Rumelhart, David E., McClelland, James L., and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations (Cambridge, MA: MIT Press, 1986); and McClelland, James L., Rumelhart, David E., and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 2: Psychological and Biological Models (Cambridge, MA: MIT Press, 1986). As their titles indicate, Rumelhart and McClelland's preferred term for these models is parallel distributed processing models. For an elementary exposition of these models and their application to modeling cognition, see Bechtel, William and Abrahamsen, Adele A., Connectionism and the Mind: An Introduction to Parallel Processing in Networks (Oxford: Basil Blackwell, 1991).

11 Bechtel, William, “Attributing Responsibility to Computer Systems,” Metaphilosophy, vol. 16 (1984), pp. 296–306.

12 Dennett, Daniel C., “Intentional Systems,” Journal of Philosophy, vol. 68 (1971), pp. 87–106.

13 Franz Brentano introduced the use of the term “intentionality” to refer to the fact that mental states are about things. He emphasized that this relation was unlike ordinary extensional relations since one could have mental states whose content did not exist. See Brentano, Franz, Psychology from an Empirical Standpoint [1874], trans. Rancurello, A. C., Terrell, D. B., and McAlister, L. L. (New York: Humanities Press, 1973).

14 See Fodor, Jerry A., The Language of Thought (New York: Crowell, 1975), and Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: MIT Press, 1987).

15 Searle, John R., “Minds, Brains, and Programs,” Behavioral and Brain Sciences, vol. 3 (1980), pp. 417–24. I have argued elsewhere that Searle's argument that we, and not AI systems (even those with robotic bodies that seem to satisfy the conditions I set out below), enjoy intrinsic intentionality rests on an illusion stemming from our use of language: with language we acquire the possibility of meta-representations that allow us to characterize the content of our mental states. This makes it seem as if, whenever we are in a mental state, its contents are directly presented to us and we know directly what they are about (part of Searle's argument that we and not AI systems have intrinsic intentionality). I contend, rather, that we have more than one representational system, and are able to use one to specify the contents of the other. See Bechtel, William and Abrahamsen, Adele A., “Connectionism and the Future of Folk Psychology,” in Natural and Artificial Minds, ed. Burton, Robert G. (Albany, NY: SUNY Press, 1993), pp. 69–100.

16 The origins of eliminative materialism are found in the work of philosophers such as Feyerabend, Paul K., “Materialism and the Mind-Body Problem,” Review of Metaphysics, vol. 17 (1963), pp. 49–67; and Rorty, Richard, “Mind-Body Identity, Privacy, and Categories,” Review of Metaphysics, vol. 19 (1965), pp. 24–54. The most vociferous contemporary statements of eliminative materialism are by Churchland, Patricia S., Neurophilosophy: Towards a Unified Science of the Mind-Brain (Cambridge, MA: MIT Press, 1986); and Churchland, Paul M., A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (Cambridge, MA: MIT Press, 1989). A variant of the eliminative materialist position is found in Stich, Stephen P., From Folk Psychology to Cognitive Science (Cambridge, MA: MIT Press, 1983).

17 For a review of the philosophical arguments for and against functionalism, see Bechtel, William, Philosophy of Mind: An Overview for Cognitive Science (Hillsdale, NJ: Lawrence Erlbaum, 1988).

18 For contemporary reviews of cognitive psychology, see Barsalou, Lawrence, Cognitive Psychology (Hillsdale, NJ: Lawrence Erlbaum, 1994); and Anderson, John R., Cognitive Psychology and Its Implications, 3d ed. (San Francisco: Freeman, 1990).

19 Pylyshyn, Zenon W., Computation and Cognition: Toward a Foundation for Cognitive Science (Cambridge, MA: MIT Press, 1984).

20 Newell, Allen and Simon, Herbert A., Human Problem Solving (Englewood Cliffs, NJ: Prentice-Hall, 1972).

21 Langley, Patrick, Simon, Herbert A., Bradshaw, Gary L., and Zytkow, J. M., Scientific Discovery: Computational Explorations of the Creative Processes (Cambridge, MA: MIT Press, 1987).

22 For a detailed development of this claim that does not itself invoke neural networks, see Margolis, Howard, Patterns, Thinking, and Cognition (Chicago: University of Chicago Press, 1987).

23 This network is described more fully in Bechtel, William, “Natural Deduction in Connectionist Systems,” Synthese, vol. 101, no. 3 (1994), pp. 433–63.

24 This distinction is due to Ryle, Gilbert, The Concept of Mind (New York: Barnes and Noble, 1949).

25 Ramsey, William, Stich, Stephen P., and Garon, J., “Connectionism, Eliminativism, and the Future of Folk Psychology,” Philosophical Perspectives, vol. 4 (1990), pp. 499–533.

26 Hinton, Geoffrey E., “Learning Distributed Representations of Concepts,” Proceedings of the Eighth Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Lawrence Erlbaum, 1986), pp. 1–12; and van Gelder, Timothy, “What is the ‘D’ in ‘PDP’? A Survey of the Concept of Distribution,” in Philosophy and Connectionist Theory, ed. Ramsey, William, Stich, Stephen P., and Rumelhart, David E. (Hillsdale, NJ: Lawrence Erlbaum, 1991), pp. 33–59.

27 Churchland, A Neurocomputational Perspective (supra note 16).

28 For the initial challenge to the classical view of concepts, see Rosch, Eleanor and Mervis, Carolyn B., “Family Resemblances: Studies in the Internal Structure of Categories,” Cognitive Psychology, vol. 7 (1975), pp. 573–605. Rosch and Mervis focused primarily on establishing that categories had a “prototype structure” (that is, that members of a category were ranked from more prototypical to less prototypical), not on the mechanism by which people made assignments to categories. For a review of subsequent psychological research which has construed comparison to prototypes as the basis of categorization (as well as an alternative view which construes comparison to multiple exemplars or actual members of a category as the basis), see Barsalou, Cognitive Psychology (supra note 18); and Smith, Edward E., “Categorization,” in Invitation to Cognitive Science, vol. 3: Thinking, ed. Osherson, Daniel N. and Lasnik, Howard (Cambridge, MA: MIT Press, 1990), pp. 33–53.
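
To make the contrast drawn in this note concrete, the following is a minimal sketch of prototype-based versus exemplar-based categorization over toy feature vectors; the categories, feature values, and Euclidean distance measure are illustrative assumptions, not a reconstruction of Rosch and Mervis's materials.

```python
import numpy as np

# Toy feature vectors for two categories (rows = known members). These data,
# and the Euclidean distance measure, are illustrative assumptions only.
birds   = np.array([[0.9, 0.8, 0.1], [0.8, 0.9, 0.2], [0.7, 0.7, 0.0]])
mammals = np.array([[0.1, 0.2, 0.9], [0.2, 0.1, 0.8], [0.0, 0.3, 0.9]])
novel   = np.array([0.75, 0.7, 0.15])   # item to be categorized

def prototype_distance(item, members):
    # Prototype view: compare the item to the category's central tendency.
    return np.linalg.norm(item - members.mean(axis=0))

def exemplar_distance(item, members):
    # Exemplar view: compare the item to each stored member and take the nearest.
    return np.linalg.norm(members - item, axis=1).min()

for name, dist in [("prototype", prototype_distance), ("exemplar", exemplar_distance)]:
    label = "bird" if dist(novel, birds) < dist(novel, mammals) else "mammal"
    print(f"{name} comparison assigns the novel item to: {label}")
```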

29 Churchland, Paul M., “The Neural Representation of the Social World,” in Minds and Morals, ed. May, L., Friedman, M., and Clark, A. (Cambridge, MA: MIT Press, 1995).

30 See Bechtel and Abrahamsen, “Connectionism and the Future of Folk Psychology” (supra note 15).

31 For an explication of logical behaviorism, see Bechtel, Philosophy of Mind (supra note 17), ch. 2.

32 For an example, see Bechtel, William and Richardson, Robert C., Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research (Princeton, NJ: Princeton University Press, 1993), ch. 8.

33 See Rumelhart, David E. and McClelland, James L., “On Learning the Past Tense of English Verbs,” in Parallel Distributed Processing, vol. 2 (supra note 10), ch. 18; and Plunkett, Kim and Marchman, Virginia, “U-shaped Learning and Frequency Effects in a Multilayered Perceptron,” Cognition, vol. 38 (1991), pp. 1–60.

34 See Hinton, Geoffrey E. and Shallice, Timothy, “Lesioning an Attractor Network: Investigations of Acquired Dyslexia,” Psychological Review, vol. 98 (1991), pp. 74–95.

35 See St. John, Mark F. and McClelland, James L., “Learning and Applying Contextual Constraints in Sentence Comprehension,” Artificial Intelligence, vol. 46 (1990), pp. 217–57; and Miikkulainen, Risto, Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory (Cambridge, MA: MIT Press, 1993).

36 This approach of embodying neural networks in robots has been pursued by several researchers. In some of the most interesting work, it has been coupled with a procedure for evolving new network architectures through application of the genetic algorithm. (The genetic algorithm is a procedure for revising computer code by a process of random variation and selective retention of improved variants.) See Nolfi, S., Elman, Jeffrey L., and Parisi, D., “Learning and Evolution in Neural Networks,” Adaptive Behavior, vol. 3, no. 1 (1994), pp. 5–28; and Nolfi, S., Miglino, O., and Parisi, D., “Phenotypic Plasticity in Evolving Neural Networks,” Proceedings of the First Conference from Perception to Action, ed. Gaussier, D. P. and Nicoud, J. D. (Los Alamitos, CA: IEEE Press, 1994). I have argued for the importance of such approaches in creating networks with genuine intentionality in Bechtel, William, “The Case for Connectionism,” Philosophical Studies, vol. 71 (1993), pp. 119–54.
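
The parenthetical gloss on the genetic algorithm (random variation plus selective retention of improved variants) can be illustrated with a minimal sketch; the bit-string genome, toy fitness function, and parameter settings below are assumptions for the example, not details of the Nolfi et al. simulations.

```python
import random

# Minimal sketch of a genetic algorithm: random variation (mutation and
# crossover) plus selective retention of improved variants. The genome
# encoding, fitness function, and parameters are illustrative assumptions.
GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy fitness: count of 1-bits (stands in for, e.g., a controller's score).
    return sum(genome)

def mutate(genome):
    # Random variation: flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Random variation: splice two parent genomes at a random point.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selective retention: keep the better-scoring half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Breed new variants from the survivors to refill the population.
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```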

37 See Fetzer, James, Artificial Intelligence: Its Scope and Limits (Dordrecht: Kluwer, 1990); and Peirce, Charles Sanders, “Speculative Grammar,” in Hartshorne, Charles and Weiss, Paul, eds., Collected Papers of Charles Sanders Peirce, vol. 2, Elements of Logic (Cambridge: Harvard University Press, 1960).

38 Savage-Rumbaugh, E. Sue, Ape Language: From Conditioned Response to Symbol (New York: Columbia University Press, 1986).

39 The importance of these contrast relations is stressed by Deacon, Terrence W., Symbolic Origins (New York: W. W. Norton, forthcoming).

40 This process is known as protocol analysis. See Ericsson, K. Anders and Simon, Herbert A., Protocol Analysis: Verbal Reports as Data (Cambridge, MA: MIT Press, 1984).

41 See Bechtel and Richardson, Discovering Complexity (supra note 32).

42 For a discussion of the relative merits of cluster analysis and principal-components analysis, and a detailed example of using principal-components analysis to understand the behavior of a network, see Elman, Jeffrey L., “Distributed Representations, Simple Recurrent Networks, and Grammatical Structure,” Machine Learning, vol. 7 (1991), pp. 195–225.
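
As a rough illustration of the technique this note cites, the sketch below applies principal-components analysis to a matrix of hidden-unit activation vectors and projects them onto the first two components; the random activations stand in for recordings from a trained network and are purely an assumption for the example.

```python
import numpy as np

# Sketch: principal-components analysis of hidden-unit activation vectors.
# The "activations" here are random stand-ins for vectors recorded from a
# trained network (one row per input pattern, one column per hidden unit).
rng = np.random.default_rng(0)
activations = rng.normal(size=(100, 20))   # 100 patterns x 20 hidden units

# Center the data and compute principal components via SVD.
centered = activations - activations.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Project each activation vector onto the first two components; clustering of
# the projected points is the kind of structure such analyses inspect.
projected = centered @ components[:2].T
explained = (singular_values**2) / (singular_values**2).sum()
print("variance explained by first two components:", explained[:2].round(3))
print("first projected point:", projected[0].round(3))
```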

43 Davidson, Donald, “Rational Animals,” Dialectica, vol. 36 (1982), pp. 318–27.

44 See Bechtel, William, “Biological and Social Constraints on Cognitive Processing: The Need for Dynamical Interactions between Levels of Inquiry,” Canadian Journal of Philosophy, Supplementary Volume 20 (1994), pp. 133–64.

45 Nisbett, Richard E. and Wilson, Timothy D., “Telling More Than We Can Know: Verbal Reports on Mental Processes,” Psychological Review, vol. 84 (1977), pp. 231–59.

46 Vygotsky, Lev S., Thought and Language [1934] (Cambridge, MA: MIT Press, 1962).

47 Newell and Simon, Human Problem Solving (supra note 20). See also Ericsson and Simon, Protocol Analysis (supra note 40).