Humans have two kinds of beliefs, intuitive beliefs and reflective beliefs. Intuitive beliefs are a fundamental category of cognition, defined in the architecture of the mind. They are formulated in an intuitive mental lexicon. Humans are also capable of entertaining an indefinite variety of higher-order or "reflective" propositional attitudes, many of which are of a credal sort. Reasons to hold reflective beliefs are provided by other beliefs that describe the source of the reflective belief as reliable, or that provide an explicit argument in favour of the reflective belief. The mental lexicon of reflective beliefs includes not only intuitive, but also reflective concepts.
I argue, with examples, that most human cognitive skills are neither instincts nor gadgets but mechanisms shaped both by evolved dispositions and by cultural inputs. This shaping can work either through evolved skills fulfilling their function with the help of cultural skills that they contribute to shape, or through cultural skills recruiting evolved skills and adjusting to them.
The editors of the volume asked me to provide a broad overview of the beginnings of relevance theory back in the 1970s, how it has developed over the decades and where I see it moving in the future, reflecting in the process on the collective work that Deirdre Wilson and I initiated and that has been joined and considerably enriched by many others. Here are some personal notes to help address these questions.
Boyer & Petersen (B&P) assume that the intuitive systems underlying folk-economic beliefs (FEBs), and, in particular, emporiophobia, evolved in the environment of evolutionary adaptedness (EEA), before markets. This makes the historical development of markets puzzling. We suggest that what evolved in the EEA are templates that help children develop intuitive systems partly adjusted to their cultural environment. This helps resolve the puzzle.
We suggest that preschoolers’ frequent obliviousness to the risks and opportunities of deception comes from a trusting stance supporting verbal communication. Three studies (N = 125) confirm this hypothesis. Three-year-olds can hide information from others (Study 1) and they can lie (Study 2) in simple settings. Yet when one introduces the possibility of informing others in the very same settings, three-year-olds tend to be honest (Studies 1 and 2). Similarly, four-year-olds, though capable of treating assertions as false, trust deceptive informants (Study 3). We suggest that children's reduced sensitivity to the opportunities of lying, and to the risks of being lied to, might help explain their difficulties on standard false belief tasks.
As Kline envisages, there is an important relationship between cultural attraction and teaching. The very function of teaching is to make the content taught an attractor. Teaching, moreover, typically fulfills its function by exploiting a variety of factors of cultural attraction that help make its content learnable and teachable.
In ‘The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication,’ Kourken Michaelian questions the basic tenets of our article ‘Epistemic Vigilance’ (Sperber et al. 2010). Here I defend, against Michaelian's criticisms, the view that epistemic vigilance plays a major role in explaining the evolutionary stability of communication and that the honesty of speakers and the reliability of their testimony are, to a large extent, an effect of hearers' vigilance.
What makes humans moral beings? This question can be understood either as a proximate “how” question or as an ultimate “why” question. The “how” question is about the mental and social mechanisms that produce moral judgments and interactions, and has been investigated by psychologists and social scientists. The “why” question is about the fitness consequences that explain why humans have morality, and has been discussed by evolutionary biologists in the context of the evolution of cooperation. Our goal here is to contribute to a fruitful articulation of such proximate and ultimate explanations of human morality. We develop an approach to morality as an adaptation to an environment in which individuals were in competition to be chosen and recruited in mutually advantageous cooperative interactions. In this environment, the best strategy is to treat others with impartiality and to share the costs and benefits of cooperation equally. Those who offer less than others will be left out of cooperation; conversely, those who offer more will be exploited by their partners. In line with this mutualistic approach, the study of a range of economic games involving property rights, collective actions, mutual help and punishment shows that participants' distributions aim at sharing the costs and benefits of interactions in an impartial way. In particular, the distribution of resources is influenced by effort and talent, and by the perception of each participant's rights over the resources to be distributed.
Our discussion of the commentaries begins, at the evolutionary level, with issues raised by our account of the evolution of morality in terms of partner-choice mutualism. We then turn to the cognitive level and the characterization and workings of fairness. In a final section, we discuss the degree to which our fairness-based approach to morality extends to norms that are commonly considered moral even though they are distinct from fairness.
When Mary speaks to Peter, she has in mind a certain meaning that she intends to convey: say, that the plumber she just called is on his way. To convey this meaning, she utters certain words: say, ‘He will arrive in a minute’. What is the relation between Mary’s intended meaning and the linguistic meaning of her utterance? A simple (indeed simplistic) view is that for every intended meaning there is a sentence with an identical linguistic meaning, so that conveying a meaning is just a matter of encoding it into a matching verbal form, which the hearer decodes back into the corresponding linguistic meaning. But this is not what happens, at least in practice. There are always components of a speaker’s meaning which her words do not encode: for instance, the English word ‘he’ does not specifically refer to the plumber Mary is talking about. Indeed, we would argue that the idea that for most, if not all, possible meanings that a speaker might intend to convey, there is a sentence in a natural language which has that exact meaning as its linguistic meaning is quite implausible.
An apparently more realistic view is that the speaker typically produces an utterance which encodes some, but not all, of her meaning. Certain components of her meaning – in Mary’s utterance the referent of ‘he’ or the place where ‘he’ will arrive, for instance – are not encoded, and have to be inferred by the hearer; so while it might seem that a speaker’s meaning should in principle be fully encodable, attempts to achieve such a full encoding in practice leave an unencoded, and perhaps unencodable, residue.
How are non-declarative sentences understood? How do they differ semantically from their declarative counterparts? Answers to these questions once made direct appeal to the notion of illocutionary force. When they proved unsatisfactory, the fault was diagnosed as a failure to distinguish properly between mood and force. For some years now, efforts have been under way to develop a satisfactory account of the semantics of mood. In this chapter, we consider the current achievements and future prospects of the mood-based semantic programme.
Distinguishing mood and force
Early speech-act theorists regarded illocutionary force as a properly semantic category. Sentence meaning was identified with illocutionary-force potential: to give the meaning of a sentence was to specify the range of speech acts that an utterance of that sentence could be used to perform. Typically, declarative sentences were seen as linked to the performance of assertive speech acts (committing the speaker to the truth of the proposition expressed), while imperative and interrogative sentences were linked to the performance of directive speech acts (requesting action and information, respectively). Within this framework, pragmatics, the theory of utterance interpretation, had at most the supplementary role of explaining how hearers, in context, choose an actual illocutionary force from among the potential illocutionary forces semantically assigned to the sentence uttered.
The student of rhetoric is faced with a paradox and a dilemma. We will suggest a solution to the dilemma, but this will only make the paradox more blatant.
Let us start with the paradox. Rhetoric took pride of place in formal education for two and a half millennia. Its very rich and complex history is worth detailed study, but it can be summarised in a few sentences. Essentially the same substance was passed on by eighty generations of teachers to eighty generations of pupils. If there was a general tendency, it consisted merely in a narrowing of the subject matter of rhetoric: one of its five branches, elocutio, the study of figures of speech, gradually displaced the others, and in some schools, became identified with rhetoric tout court. (We will also be guilty of this and several other simplifications.) The narrowing was not even offset by a corresponding increase in theoretical depth. Pierre Fontanier’s Les Figures du Discours is not a radical improvement on Quintilian’s Institutio Oratoria, despite the work of sixty generations of scholars in between.
Here are a couple of apparent platitudes. As speakers, we expect what we say to be accepted as true. As hearers, we expect what is said to us to be true. If it were not for these expectations, if they were not often enough satisfied, there would be little point in communicating at all. David Lewis (who has proposed a convention of truthfulness) and Paul Grice (who has argued for maxims of truthfulness), among others, have explored some of the consequences of these apparent platitudes. We want to take a different line and argue that they are strictly speaking false. Of course hearers expect to be informed and not misled by what is communicated; but what is communicated is not the same as what is said. We will argue that language use is not governed by any convention or maxim of truthfulness in what is said. Whatever genuine facts such a convention or maxim was supposed to explain are better explained by assuming that communication is governed by a principle of relevance.
According to David Lewis (1975), there is a regularity (and a moral obligation) of truthfulness in linguistic behaviour. This is not a convention in Lewis’s sense, since there is no alternative regularity which would be preferable as long as everyone conformed to it. However, for any language £ of a population P, Lewis argues that there is a convention of truthfulness and trust in £ (an alternative being a convention of truthfulness and trust in some other language £´):
My proposal is that the convention whereby a population P uses a language £ is a convention of truthfulness and trust in £. To be truthful in £ is to act in a certain way: to try never to utter any sentences of £ that are not true in £. Thus it is to avoid uttering any sentence of £ unless one believes it to be true in £. To be trusting in £ is to form beliefs in a certain way: to impute truthfulness in £ to others, and thus to tend to respond to another’s utterance of any sentence of £ by coming to believe that the uttered sentence is true in £.
Pragmatics is often described as the study of language use, as opposed to language structure. In this broad sense, it covers a variety of loosely related research programmes ranging from formal studies of deictic expressions to sociological studies of ethnic verbal stereotypes. In a more focused sense – the one we will use here – pragmatics contrasts with semantics, the study of linguistic meaning, and is the study of how contextual factors interact with linguistic meaning in the interpretation of utterances. Here we will briefly highlight a range of closely related, fairly central pragmatic issues and approaches that have been of interest to linguists and philosophers of language in the past thirty years or so. Pragmatics, as we will describe it, is an empirical science, but one with philosophical origins and philosophical import.
References to pragmatics are found in philosophy since the work of Charles Morris (1938), who defined it as the study of the relations between signs and their interpreters. However, it was the philosopher Paul Grice’s William James Lectures at Harvard in 1967 that led to the real development of the field. Grice introduced new conceptual tools – in particular the notion of implicature – in an attempt to reconcile the concerns of the two then dominant approaches to the philosophy of language, Ideal Language Philosophy and Ordinary Language Philosophy (on the philosophical origins of pragmatics, see Recanati 1987, 1998, 2004a, 2004b). Ideal language philosophers in the tradition of Frege, Russell, Carnap and Tarski were studying language as a formal system. Ordinary language philosophers in the tradition of the later Wittgenstein, Austin and Strawson were studying actual linguistic usage, highlighting in descriptive terms the complexity and subtlety of meanings and the variety of forms of verbal communication. For ordinary language philosophers, there was an unbridgeable gap between the semantics of formal and natural languages. Grice showed that the gap could at least be reduced by drawing a sharp distinction between sentence meaning and speaker’s meaning, and explaining how relatively simple and schematic linguistic meanings could be used in context to convey richer and fuzzier speaker’s meanings, consisting not only of what was said, but also of what was implicated. This became the foundation for most of modern pragmatics.