Substrate independence and mind-body functionalism claim that thinking does not depend on any particular kind of physical implementation. But real-world information processing depends on energy, and energy depends on material substrates. Biological evidence for this dependence comes from ecology and neuroscience, while computational evidence comes from neuromorphic computing and deep learning. Attention to energy requirements undermines the use of substrate independence to support claims about the feasibility of artificial intelligence, the moral standing of robots, the possibility that we may be living in a computer simulation, the plausibility of transferring minds into computers, and the autonomy of psychology from neuroscience.
This chapter reviews the two main current approaches to cognitive architecture: rule-based systems and connectionism. Both kinds of architecture assume the central hypothesis of cognitive science that thinking consists of the application of computational procedures to mental representations, but they propose very different kinds of representations and procedures. Both rule-based and connectionist architectures have had many successes in explaining important psychological phenomena concerning problem solving, learning, language use, and other kinds of thinking. Given their large and only partially overlapping range of explanatory applications, it seems unlikely that either of the two approaches to cognitive architecture will come to dominate cognitive science. The chapter suggests a reconciliation of the two approaches by means of theoretical neuroscience. Unified understanding of how the brain can perform both serial problem solving using rules and parallel constraint satisfaction using distributed representations will be a major triumph of cognitive science.
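To make the rule-based side of this contrast concrete, here is a minimal, invented sketch (not taken from the chapter) of serial rule application in Python: IF-THEN rules are matched against explicit symbolic representations and fired one at a time until nothing new follows. A correspondingly minimal connectionist sketch of parallel constraint satisfaction appears after the explanatory coherence abstract below.

# Hypothetical rule-based sketch: serial forward chaining over IF-THEN rules.
# The rules and facts are invented for illustration.
rules = [
    ({"raining"}, "streets_wet"),        # IF raining THEN streets_wet
    ({"streets_wet"}, "slippery"),       # IF streets_wet THEN slippery
]

def forward_chain(facts, rules):
    """Apply rules one at a time until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"raining"}, rules))  # {'raining', 'streets_wet', 'slippery'}

The essential contrast is that each rule firing here is a discrete, serial step over explicit symbols, whereas connectionist architectures settle many soft constraints in parallel over distributed activations.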
This paper uses the economic crisis of 2008 as a case study to examine the explanatory validity of collective mental representations. Distinguished economists such as Paul Krugman and Joseph Stiglitz attribute collective beliefs, desires, intentions, and emotions to organizations such as banks and governments. I argue that the most plausible interpretation of these attributions is that they are metaphorical pointers to a complex of multilevel social, psychological, and neural mechanisms. This interpretation also applies to collective knowledge in science: scientific communities do not literally have collective representations, but social mechanisms do make important contributions to scientific knowledge.
Why did the oxygen theory of combustion supersede the phlogiston theory? Why is Darwin's theory of evolution by natural selection superior to creationism? How can a jury in a murder trial decide between conflicting views of what happened? This target article develops a theory of explanatory coherence that applies to the evaluation of competing hypotheses in cases such as these. The theory is implemented in a connectionist computer program with many interesting properties.
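The connectionist implementation referred to here evaluates rival hypotheses by parallel constraint satisfaction. The following minimal Python sketch assumes the standard scheme of this kind: evidence and hypotheses are units, explanatory coherence relations are excitatory links, contradictions are inhibitory links, and activations are updated in parallel until the network settles. The hypotheses and weights are invented, and the sketch is only an illustration of the general approach, not the program described in the article.

# Hypothetical constraint-satisfaction sketch: one evidence unit and two rival hypotheses.
units = ["E1", "H1", "H2"]
links = {("E1", "H1"): 0.4,   # H1 explains E1 (excitatory link)
         ("E1", "H2"): 0.3,   # H2 also explains E1, but less well
         ("H1", "H2"): -0.6}  # H1 and H2 contradict each other (inhibitory link)

activation = {u: 0.01 for u in units}
activation["E1"] = 1.0        # evidence units are clamped at full activation

def net_input(u):
    """Weighted sum of the activations of every unit linked to u."""
    total = 0.0
    for (a, b), w in links.items():
        if u == a:
            total += w * activation[b]
        elif u == b:
            total += w * activation[a]
    return total

for _ in range(200):          # update all units in parallel until the network settles
    new = {"E1": 1.0}
    for u in ("H1", "H2"):
        n, a = net_input(u), activation[u]
        # common connectionist rule: decay plus net input scaled by distance to the bound
        delta = n * (1 - a) if n > 0 else n * (a + 1)
        new[u] = max(-1.0, min(1.0, a * 0.95 + delta))
    activation = new

print(activation)             # H1 settles positive (accepted); H2 settles negative (rejected)

Acceptance and rejection thus emerge from the whole network settling at once, rather than from any single serial inference.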
The problem of inference to explanatory hypotheses has a long history in philosophy and a much shorter one in psychology and artificial intelligence (AI). Scientists and philosophers have long considered the evaluation of theories on the basis of their explanatory power. In the late nineteenth century, Peirce discussed two forms of inference to explanatory hypotheses: hypothesis, which involved the acceptance of hypotheses, and abduction, which involved merely the initial formation of hypotheses (Peirce 1931–1958; Thagard 1988a). Researchers in artificial intelligence and some philosophers have used the term “abduction” to refer to both the formation and the evaluation of hypotheses. AI work on this kind of inference has concerned such diverse topics as medical diagnosis (Josephson et al. 1987; Pople 1977; Reggia et al. 1983) and natural language interpretation (Charniak and McDermott 1985; Hobbs et al. 1988). In philosophy, the acceptance of explanatory hypotheses is usually called inference to the best explanation (Harman 1973, 1986). In social psychology, attribution theory considers how people in everyday life form hypotheses to explain events (Fiske and Taylor 1984).
The study of attention is central to understanding how information is processed in cognitive systems. Modern cognitive research interprets attention as the capacity to select and enhance limited aspects of currently processed information. This chapter reviews key computational models and theoretical directions pursued by researchers trying to understand the multifaceted phenomenon of attention. A broad division is drawn between theories and models addressing the mechanisms by which attention modulates specific aspects of (primarily visual) perception and those focused on goal-driven and task-oriented components of attention. A recent area of activity in modeling goal-driven attention concerns how attentional biases arise and are modulated over the course of task performance. Finally, the chapter considers the contrast, or continuum, between attentional control and automaticity, an issue that comes into sharp focus when examining the differences between novice and expert cognitive task performance, and the transition from one to the other.
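To illustrate the first family of models (attention modulating perception), the following minimal Python sketch shows multiplicative gain modulation: a top-down goal boosts task-relevant feature channels before a simple winner-take-all selection. The channels, response values, and gain are invented for illustration and are not drawn from the chapter.

# Hypothetical sketch of attentional gain modulation with winner-take-all selection.
stimulus = {"red": 0.5, "green": 0.6, "motion": 0.3}   # bottom-up feature responses
goal = {"red": 1.8}                                    # task set: look for red things

def attend(stimulus, goal, baseline_gain=1.0):
    """Scale each channel by its attentional gain, then select the strongest."""
    modulated = {f: r * goal.get(f, baseline_gain) for f, r in stimulus.items()}
    return modulated, max(modulated, key=modulated.get)

modulated, selected = attend(stimulus, goal)
print(modulated)   # {'red': 0.9, 'green': 0.6, 'motion': 0.3}
print(selected)    # 'red' wins despite a weaker bottom-up response than 'green'

Goal-driven models of the second family can be read as accounts of where such gain values come from and how they change as a task unfolds.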
In the 1890s, the great American philosopher C. S. Peirce (1931–1958) used the term “abduction” to refer to a kind of inference that involves the generation and evaluation of explanatory hypotheses. This term is much less familiar today than “deduction,” which applies to inference from premises to a conclusion that has to be true if the premises are true. And it is much less familiar than “induction,” which sometimes refers broadly to any kind of inference that introduces uncertainty, and sometimes refers narrowly to inference from examples to rules, which I will call “inductive generalization.” Abduction is clearly a kind of induction in the broad sense, in that the generation of explanatory hypotheses is fraught with uncertainty. For example, if the sky suddenly turns dark outside my window, I may hypothesize that there is a solar eclipse, but many other explanations are possible, such as the arrival of an intense storm or even a huge spaceship.
Despite its inherent riskiness, abductive inference is an essential part of human mental life. When scientists produce theories that explain their data, they are engaging in abductive inference. For example, psychological theories about mental representations and processing are the result of abductions spurred by the need to explain the results of psychological experiments. In everyday life, abductive inference is ubiquitous, for example when people generate hypotheses to explain the behavior of others, as when I infer that my son is in a bad mood to explain a curt response to a question.
What is the relation between coherence and truth? This paper rejects numerous answers to this question, including the following: truth is coherence; coherence is irrelevant to truth; coherence always leads to truth; coherence leads to probability, which leads to truth. I will argue that coherence of the right kind leads to at least approximate truth. The right kind is explanatory coherence, where explanation consists in describing mechanisms. We can judge that a scientific theory is progressively approximating the truth if it is increasing its explanatory coherence in two key respects: broadening by explaining more phenomena and deepening by investigating layers of mechanisms. I sketch an explanation of why deepening is a good epistemic strategy and discuss the prospect of deepening knowledge in the social sciences and everyday life.
Descartes contended that “I am obliged in the end to admit that none of my former ideas are beyond legitimate doubt” (1964, 64). Accordingly, he adopted a method of doubting everything: “Since my present aim was to give myself up to the pursuit of truth alone, I thought I must do the very opposite, and reject as if absolutely false anything as to which I could imagine the least doubt, in order to see if I should not be left at the end believing something that was absolutely indubitable” (1964, 31). Similarly, other philosophers have raised doubts about the justifiability of beliefs concerning the external world, the existence of other minds, and moral principles; philosophical skepticism has a long history (Popkin 1979).
A biochemical pathway is a sequence of chemical reactions in a biological organism. Such pathways specify mechanisms that explain how cells carry out their major functions by means of molecules and reactions that produce regular changes. Many diseases can be explained by defects in pathways, and new treatments often involve finding drugs that correct those defects. This paper presents explanation schemas and treatment strategies that characterize how thinking about pathways contributes to biomedical discovery. It discusses the significance of pathways for understanding the nature of diseases, explanations, and theories.
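To illustrate the kind of explanation schema at issue, a pathway can be represented as an ordered series of enzyme-catalyzed reactions, a disease explained by an inactive step, and a treatment as a drug that restores the missing activity. The following Python sketch uses invented placeholder reactions, enzymes, and drug targets, not examples from the paper.

# Hypothetical pathway sketch: a disease explained by a defective step, corrected by a drug.
pathway = [
    {"step": "A -> B", "enzyme": "E1", "active": True},
    {"step": "B -> C", "enzyme": "E2", "active": False},   # defect: E2 is nonfunctional
    {"step": "C -> D", "enzyme": "E3", "active": True},
]

def explain_disease(pathway):
    """Explanation schema: the disease is traced to the first inactive step,
    which blocks production of all downstream molecules."""
    for reaction in pathway:
        if not reaction["active"]:
            return f"Defect at {reaction['step']} ({reaction['enzyme']} inactive)"
    return "No defect found"

def treat(pathway, drug_targets):
    """Treatment strategy: a drug restores activity at the defective step."""
    for reaction in pathway:
        if reaction["enzyme"] in drug_targets:
            reaction["active"] = True
    return pathway

print(explain_disease(pathway))        # Defect at B -> C (E2 inactive)
treat(pathway, drug_targets={"E2"})
print(explain_disease(pathway))        # No defect found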
Almost all computational models of the mind and brain ignore details about neurotransmitters, hormones, and other molecules. The neglect of neurochemistry in cognitive science would be appropriate if the computational properties of brains relevant to explaining mental functioning were in fact electrical rather than chemical. But there is considerable evidence that chemical complexity really does matter to brain computation, including the role of proteins in intracellular computation, the operations of synapses and neurotransmitters, and the effects of neuromodulators such as hormones. Neurochemical computation has implications for understanding emotions, cognition, and artificial intelligence.
This chapter discusses the cognitive contributions that emotions make to scientific inquiry, including the justification as well as the discovery of hypotheses. James Watson's description of how he and Francis Crick discovered the structure of DNA illustrates how positive and negative emotions contribute to scientific thinking. I conclude that emotions are an essential part of scientific cognition.
Introduction
Since Plato, most philosophers have drawn a sharp line between reason and emotion, assuming that emotions interfere with rationality and have nothing to contribute to good reasoning. In his dialogue the Phaedrus, Plato compared the rational part of the soul to a charioteer who must control his steeds, which correspond to the emotional parts of the soul (Plato, 1961, p. 499). Today, scientists are often taken as the paragons of rationality, and scientific thought is generally assumed to be independent of emotional thinking.
Current research in cognitive science is increasingly challenging the view that emotions and reason are antagonistic to each other, however. Evidence is accumulating in cognitive psychology and neuroscience that emotions and rational thinking are closely intertwined (see, for example: Damasio, 1994; Kahneman, 1999; Panksepp, 1999). My aim in this chapter is to extend that work and describe the role of the emotions in scientific thinking. If even scientific thinking is legitimately emotional, then the traditional division between reason and emotion becomes totally unsupportable.
Explanations of the growth of scientific knowledge can be characterized in terms of logical, cognitive, and social schemas. But cognitive and social schemas are complementary rather than competitive, and purely social explanations of scientific change are as inadequate as purely cognitive explanations. For example, cognitive explanations of the chemical revolution must be supplemented by and combined with social explanations, and social explanations of the rise of the mechanical worldview must be supplemented by and combined with cognitive explanations. Rational appraisal of cognitive and social strategies for improving knowledge should appreciate the interdependence of mind and society.