The property generation task (i.e., “feature listing”) is often assumed to measure concepts. Typically, researchers implicitly assume that the underlying representation of a concept consists of amodal propositions, and that verbal responses during property generation reveal this conceptual content. The experiments reported here suggest instead that verbal responses during property generation reflect two alternative sources of information: the linguistic form system and the situated simulation system. In two experiments, properties bearing a linguistic relation to the word for a concept were produced earlier than properties not bearing such a relation, suggesting that early properties tend to originate in a word-association process. Conversely, properties produced later tended to describe objects and situations, suggesting that late properties tend to originate in descriptions of situated simulations. A companion neuroimaging experiment reported elsewhere confirms that early properties originate in language areas, whereas later properties originate in situated simulation areas. Together, these results, along with others in the literature, indicate that property generation is a relatively complex process, drawing on at least two systems somewhat asynchronously.
Grounded cognition offers a natural approach for integrating Bayesian accounts of optimality with mechanistic accounts of cognition, the brain, the body, the physical environment, and the social environment. The constructs of simulator and situated conceptualization illustrate how Bayesian priors and likelihoods arise naturally in grounded mechanisms to predict and control situated action.
Philosophical interest in situated cognition has been focused most intensely on the claim that human cognitive processes extend from the brain into the tools humans use. Coupling arguments are far and away the primary sort of argument given in support of transcranialism. What is common to these arguments is a tacit move from the observation that a process X is in some way causally connected (coupled) to a cognitive process Y to the conclusion that X is part of the cognitive process Y. Transcranialism is thus regularly backed by some form of the coupling-constitution fallacy, and it lacks an adequate account of the difference between the cognitive and the noncognitive. A more nagging worry is the motivation for transcranialism. This difference between the cognitive and the noncognitive explains why even transcranialists maintain that cognition extends from brains into the extraorganismal world rather than from the extraorganismal world into brains.
A central theme of modern cognitive science is that symbolic interpretation underlies human intelligence. The human brain does not simply register images, as do cameras or other recording devices. A collection of images or recordings does not make a system intelligent. Instead, symbolic interpretation of image content is essential for intelligent activity.
What cognitive operations underlie symbolic interpretation? Across decades of analysis, a consistent set of symbolic operations has arisen repeatedly in logic and knowledge engineering: binding types to tokens; binding arguments to values; drawing inductive inferences from category knowledge; predicating properties and relations of individuals; combining symbols to form complex symbolic expressions; representing abstract concepts that interpret metacognitive states. It is difficult to imagine performing intelligent computation without these operations. For this reason, many theorists have argued that symbolic operations are central, not only to artificial intelligence but to human intelligence (e.g., Fodor, 1975; Pylyshyn, 1973).
Symbolic operations provide an intelligent system with considerable power for interpreting its experience. Using type-token binding, an intelligent system can place individual components of an image into familiar categories (e.g., categorizing components of an image as people and cars). Operations on these categories then provide rich inferential knowledge that allows the perceiver to predict how categorized individuals will behave, and to select effective actions that can be taken (e.g., a perceived person may talk, cars can be driven).
Roughly speaking, an abstract concept refers to entities that are neither purely physical nor spatially constrained. Such concepts pose a classic problem for theories that ground knowledge in modality-specific systems (e.g., Barsalou, 1999, 2003a,b). How could these systems represent a concept like TRUTH? Abstract concepts also pose a significant problem for traditional theories that represent knowledge with amodal symbols. Surprisingly, few researchers have attempted to specify the content of abstract concepts using feature lists, semantic networks, or frames. It is not enough to say that an amodal node or a pattern of amodal units represents an abstract concept. It is first necessary to specify the concept's content, and then to show that a particular type of representation can express it. Regardless of how one might go about representing TRUTH, its content must be identified. Then the task of identifying how this content is represented can begin.
The primary purpose of this chapter is to explore the content of three abstract concepts: TRUTH, FREEDOM, and INVENTION. In an exploratory study, their content will be compared to the content of three concrete concepts – BIRD, CAR, and SOFA – and also to three intermediate concepts that seem somewhat concrete but more abstract than typical concrete concepts – COOKING, FARMING, and CARPETING. We will first ask participants to produce properties typically true of these concepts. We will then analyze these properties using two coding schemes.
Prior to the twentieth century, theories of knowledge were
inherently perceptual. Since then, developments in logic, statistics,
and programming languages have inspired amodal theories that rest on
principles fundamentally different from those underlying perception.
In addition, perceptual approaches have become widely viewed as
untenable because they are assumed to implement recording systems, not
conceptual systems. A perceptual theory of knowledge is developed here
in the context of current cognitive science and neuroscience. During
perceptual experience, association areas in the brain capture bottom-up
patterns of activation in sensory-motor areas. Later, in a top-down
manner, association areas partially reactivate sensory-motor areas to
implement perceptual symbols. The storage and reactivation of perceptual
symbols operates at the level of perceptual components – not at
the level of holistic perceptual experiences. Through the use of
selective attention, schematic representations of perceptual components
are extracted from experience and stored in memory (e.g., individual
memories of green, purr, hot). As memories of the same
component become organized around a common frame, they implement
a simulator that produces limitless simulations of the component
(e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects
of proprioception (e.g., lift, run) and introspection
(e.g., compare, memory, happy, hungry). Once
established, these simulators implement a basic conceptual system that
represents types, supports categorization, and produces categorical
inferences. These simulators further support productivity, propositions,
and abstract concepts, thereby implementing a fully functional
conceptual system. Productivity results from integrating simulators
combinatorially and recursively to produce complex simulations.
Propositions result from binding simulators to perceived individuals
to represent type-token relations. Abstract concepts are grounded in
complex simulations of combined physical and introspective events.
Thus, a perceptual theory of knowledge can implement a fully functional
conceptual system while avoiding problems associated with amodal symbol
systems. Implications for cognition, neuroscience, evolution,
development, and artificial intelligence are explored.
Various defenses of amodal symbol systems are addressed,
including amodal symbols in sensory-motor areas, the causal theory
of concepts, supramodal concepts, latent semantic analysis, and
abstracted amodal symbols. Various aspects of perceptual symbol
systems are clarified and developed, including perception, features,
simulators, category structure, frames, analogy, introspection,
situated action, and development. Particular attention is given
to abstract concepts, language, and computational mechanisms.
Contrary to prevailing views, productivity and
propositional construal are not problematic for
perceptual views of representation. Glenberg's embodied
representations contribute to our understanding of how these two
important processes might be implemented perceptually.
A permanently existing “idea” or “Vorstellung” which makes its appearance before the footlights of consciousness at periodic intervals, is as mythological an entity as the Jack of Spades.
William James, 1890/1950, p. 236
A central goal of cognitive science is to characterize the knowledge that underlies human intelligence. Many investigators have expended much effort toward this aim and in the process have proposed a variety of knowledge structures as the basic units of human knowledge, including definitions, prototypes, exemplars, frames, schemata, scripts, and mental models. An implicit assumption in much of this work is that knowledge structures are stable: Knowledge structures are stored in long-term memory as discrete and relatively static sets of information; they are retrieved intact when relevant to current processing; different members of a population use the same basic structures; and a given individual uses the same structures across contexts. These intuitions of stability are often compelling, and it is sometimes hard to imagine how we could communicate or perform other intelligent behaviors without stable knowledge structures.
But perhaps it is important to consider the issue of stability more explicitly. Are there stable knowledge structures in long-term memory? If so, are they retrieved as static units when relevant to current processing? Do different individuals represent a given category in the same way? Does a given individual represent a category the same way across contexts? Whatever conclusions we reach should have important implications for theories of human cognition and for attempts to implement increasingly powerful forms of machine intelligence.
As evidenced by many of the chapters in this volume, as well as in Rubin (1987), cognitive psychologists have become increasingly interested in the study of autobiographical memories. But because this development is relatively recent, it understandably exhibits certain gaps and weaknesses. Although numerous experiments have addressed the retention of autobiographical memories, relatively few have addressed the content of autobiographical memories, how they are organized, or how they are related to world knowledge. Although a fair amount of empirical work has addressed autobiographical memories, no major theories have been proposed to account for them or to integrate them with other phenomena such as comprehension, learning, and problem solving.
A benefit of the cognitive science atmosphere that has grown with the development of cognitive psychology is that diverse methodological and theoretical frameworks contribute to one another's development. Insights from one approach fill gaps, stimulate new research, and occasionally restructure another approach. This chapter reflects such cross-fertilization. My initial interest in autobiographical memories was stimulated by Janet Kolodner's computational theory of autobiographical memories (Kolodner, 1978, 1980, 1983a,b, 1984; Schank & Kolodner, 1979), and our discussions of this work led to some very preliminary attempts to integrate psychological and computational perspectives (Kolodner & Barsalou, 1982, 1983).
In contrast to cognitive psychology, computational work on autobiographical memories has primarily been theoretical and has focused on the content and organization of autobiographical memories, along with their relation to world knowledge.