1 Horwich, Paul ‘Wittgenstein and Kripke on the Nature of Meaning,’ Mind and Language 5 (1990) 105-21; ‘Meaning, Use, and Truth,’ Mind 104 (1995) 355-68; ‘The Nature of Vagueness,’ Philosophy and Phenomenological Research 58 (1997) 929-34; Truth, 2nd ed. (Oxford: Oxford University Press 1998); Meaning (Oxford: Oxford University Press 1998)
2 The correspondence between words and meaning-constituting use regularities will not always be one-to-one. Horwich allows that use regularities sometimes constitute the meanings of pairs or clusters of words, so that while each word in the cluster has a distinct meaning, those meanings are ‘grounded’ in the same use regularity. See Chapters 3 and 9 of his Meaning for details.
3 In dubbing this the ‘Classical View,’ I do not mean to suggest that it is an historically popular view. It is not.
4 Williamson, Timothy Vagueness (New York: Routledge 1994); ‘Definiteness and Knowability,’ Southern Journal of Philosophy 33 (1995) 171-91; ‘Wright on the Epistemic Conception of Vagueness,’ Analysis 56 (1996) 39-45; ‘Putnam on the Sorites Paradox,’ Philosophical Papers 25 (1996) 47-56; ‘What Makes it a Heap?’ Erkenntnis 44 (1996) 327-9; ‘Reply to Commentators,’ Philosophy and Phenomenological Research 58 (1997) 945-53. See also Sorensen, R. ‘An Argument for the Vagueness of “Vague”’ Analysis 45 (1985) 134–7; ‘The Vagueness of Knowledge,’ Canadian Journal of Philosophy 17 (1987) 767-804; Blindspots (Oxford: Oxford University Press 1988); ‘The Ambiguity of Vagueness and Precision,’ Pacific Philosophical Quarterly 70 (1989) 174-83; ‘Vagueness Within the Language of Thought,’ Philosophical Quarterly 41 (1991) 389-413; ‘A Thousand Clones,’ Mind 103 (1994) 47-54.
5 Why, in Horwich's view, do borderline contexts induce a paralysis of judgment in speakers? Because the use regularities governing vague predicates possess ‘a “gappy” character, specifying that the predicate is applied to objects possessing some underlying property to at least a certain specified degree, x, and that its negation is applied to objects possessing that property to less than a certain specified degree, y, where y is less than x, and that neither the predicate nor its negation is applied to objects possessing the property to some degree between x and y’ (Meaning, 64). In a borderline case, the relevant underlying property is present to a degree between x and y, which leads to a paralysis of judgment on the part of speakers. In Horwich's view, an explanation of vagueness in terms of gappy use regularities is preferable to an explanation in terms of an epistemic failing external to speakers (e.g., a reliabilistically construed ignorance of sharp semantic boundaries).
6 Horwich's own account of his commitment to the Classical View is in some respects curious. In his Truth, he claims that logic and the theory of truth have nothing to do with one another: ‘[Minimalism] is the proper conception of truth even in the context of deviant logics such as intuitionism or quantum logic, and would not be undermined by any arguments demonstrating the preferability of non-classical rules of inference’ (74). This would seem to free Horwich of any commitment to the Classical View, since (e.g.) intuitionism denies the law of excluded middle. But Horwich then proceeds to argue that, given the equivalence schemata for truth and falsity (‘<p> is true iff p’ and ‘<p> is false iff not-p’), we cannot deny excluded middle except on pain of contradiction. In a clarificatory vein, Horwich notes that these results concerning excluded middle ‘do not derive solely from the minimal theory of truth, but depend also on our having defined falsity as the absence of truth’ (Truth, 77). But this remark clarifies little; for the definition of falsity as the absence of truth stems directly from the equivalence schemata which form the core of Horwich's minimalism, together with the minimalist claim that those schemata exhaust the concepts of truth and falsity.
7 As when he says, ‘Given that an utterance says that TW is thin, what it takes for it to be true is just for TW to be thin, and what it takes for it to be false is for TW not to be thin. No more and no less is required. To put the condition for truth or falsity any higher or lower would be to misconceive the nature of truth or falsity’ (Vagueness, 190).
8 For fear of leaving some readers high and dry, I offer the following very crude synopsis of Horwich's position. According to Horwich's version of deflationism, there can be no explanation of what the truth or falsity of a proposition consists in over and above what is stated in such equivalence schemata as ‘<p> is true iff p’ and ‘<p> is false iff not-p.’ These schemata are supposed to be exhaustive of our concept of truth and they make no provision for departures from classical logic. Hence Horwich, qua deflationist, finds it plausible to deny that there are (e.g.) truth value gaps or failures of excluded middle.
As for the relationship between deflationism and the use theory of meaning, it is arguably impossible to be a deflationist about (e.g.) truth without also being a deflationist about other fundamental semantic concepts, such as meaning and reference. These concepts form a tightly woven fabric and are arguably interdefinable. For this reason, the deflationist about truth is apt to gravitate toward a deflationary approach to the theory of meaning, and in Horwich's view a use theory of meaning is the perfect vehicle for such an approach; for a use theory is consistent with the view that ‘the basic use regularities of different words — like different laws of nature — need have no common form; and they need not relate the words they govern to the members of their extensions’ (Meaning, 113).
9 For a good, general discussion of deflationary theories of truth and of deflationism in semantics more generally, see the introduction to Blackburn, S. and Simmons, K. eds., Truth (Oxford: Oxford University Press 1999). Also, see chapter 4 of Horwich's Meaning for an explanation of how a deflationary theory of truth ‘helps to dissolve a certain problem regarding aboutness — the notorious problem of intentionality — and thereby puts us in a position to discern the nature of meaning’ (103).
10 Like Horwich and Williamson, I will use ‘vague’ and ‘indeterminate’ interchangeably.
11 Of course, I am also assuming that any formal semantic treatment of vagueness will involve a non-classical logic. This too is a common and, in my view, plausible assumption; but I will not attempt to defend it here.
12 My discussion will focus almost exclusively on Horwich's version of the use theory. There are several reasons for this. First, there are many possible use theories in logical space, and I cannot address all of them. Second, Horwich's use theory captures, at least in my view, all of the historically essential features of a use theory of meaning. Third, Horwich's development of the use theory is probably the most sophisticated one currently to be had. Fourth, Horwich argues persuasively that his use theory is consistent with the Classical View. So, I focus on Horwich's use theory not because it is a frail specimen of the species but, on the contrary, because it is exceptionally robust.
13 Conditions (i) and (ii) are meant to hold so long as context is held fixed. The assumption of a fixed context will hold for all of the claims and examples I discuss in this paper, so I will not always make it explicit.
14 Of course, to prefer one characterization of ‘borderline case’ is not necessarily to oppose others. For example, while Williamson typically describes borderline cases in terms of indecisive rather than conflicting patterns of use, he does not seem hostile to other characterizations. This is plain when he says, ‘Some writers on vagueness, such as C.S. Peirce and Crispin Wright, characterize borderline cases in terms of the conflict rather than absence of opinion. Neither is essential; what matters is the absence of knowledge’ (‘Reply,’ 946).
15 Suppose someone suggested that, if only we employed a richer, more sophisticated psychological idiom, then we would see that there is a unique, best psychological explanation of every instance of human behavior. While most everyone allows that such an idiom would sometimes assist us in the task of explaining behavior, few would find the suggestion palatable. For taking up a more sophisticated idiom might be no help at all and might make matters worse by greatly multiplying the number of equally matched explanations from which we are to choose. Such are the vagaries of explanation, including not only psychological but also use theoretic explanation.
16 My discussion of the Heap Case was improved by comments from two anonymous referees.
17 There is no reason to complain that my invocation of ‘facts’ here is inflationary. I do not have a theoretically ‘loaded’ notion of fact in mind, and the deflationist is perfectly at home with the idea that true sentences (or propositions, if you prefer) ‘owe their truth’ to the facts. It is even consistent with deflationism to say that ‘Grass is green’ is true because grass is green, or to say that ‘Grass is green’ is false because grass is not green. What the deflationist denies is the philosophical thesis that the truth of a sentence is in general constituted by its bearing some well-defined, explanatory relation to ‘the facts.’ Horwich makes this point by remarking that his minimalism ‘does not deny that truths do correspond — in some sense — to the facts; it acknowledges that statements owe their truth to the nature of reality; and it does not dispute the existence of relationships between truth, reference, and predicate satisfaction’ (Truth, 105).
18 The details of strong indeterminacy I am happy to leave open. If it is strongly indeterminate that p, this might be taken to mean that p is neither true nor false; yet strong indeterminacy might be taken to be a primitive notion on a par with truth and falsity. Thus, its being strongly indeterminate that p might be taken to mean that there is simply no true answer to the question whether p is true (or false) that does not invoke the notion of strong indeterminacy, including the answer that p is neither true nor false. Strong indeterminacy can be construed in other ways as well, but this is not my concern here. Indeed, it is consistent with the argument of this paper that the notion of strong indeterminacy is ultimately unintelligible. My claim is merely that it is a notion indispensable to the use theorist.
19 In other words, the use theorist is forced to fall back either on a system of logic that denies bivalence and/or excluded middle or on one that fails to assert one or both of these principles. For example, the use theorist might at this point be attracted to supervaluationism, which denies bivalence while retaining excluded middle. Of course, he must then confront Williamson's (Vagueness) arguments to the effect that supervaluationism is unable to accommodate the phenomenon of higher-order vagueness.
20 Notice that if, like Horwich, we accept the claim that meaning is compositional, then my analysis of the Heap Case quickly turns into an argument for a completely general thesis of strong meaning indeterminacy. If meaning is compositional, then, insofar as the Heap Case shows that the meaning of ‘…best explains the speakers’ use of “heap”’ is strongly indeterminate, it shows that the meanings of that predicate's constituent expressions are strongly indeterminate. And insofar as the meanings of ‘best,’ ‘explain,’ ‘speaker,’ and the like are strongly indeterminate, then so are the meanings of all of the sentences they can enter into, and the meanings of all the words figuring in those sentences, and so on until strong meaning indeterminacy is pervasive. Due to compositionality, strong indeterminacy is like a spot of dye in a pool of water: what begins as a small drop quickly spreads. The only direct way for the use theorist to resist this argument is by rejecting the compositionality of meaning. I doubt the use theorist will rejoice over this option; Horwich's use theorist certainly will not. Yet even if compositionality is rejected, there is still the strong indeterminacy affecting the meaning of ‘…best explains speakers’ use of “heap”’ in the Heap Case; and if we can engineer the Heap Case, we can just as easily engineer the Red Case, the Bald Case, the Thin Case, the Rich Case, the Tall Case, and so on for any number of vague predicates. Even these isolated instances of strong meaning indeterminacy are inconsistent with the Classical View, and the use theorist cannot avoid them.
21 One might object that the Overdetermination Argument relies upon a defective notion of a speech community. If two populations just happen to use ‘heap’ with the same meaning, this does not imply that they form a single speech community. Indeed, if two isolated populations turn out, by happy coincidence, to speak English, this does not imply that they form a single community of English speakers. Speech communities are not merely communities in which a fixed set of words possess the same meaning, but communities in which the meanings of those words are fixed by the same use regularities. Perhaps this is so. In that case, what the Overdetermination Argument boils down to is the claim that my Heap Case presents us not with a single speech community that is divided, at the level of use, into two distinct subpopulations, but with different speech communities that differ subtly at the level of use.
22 Alternatively, suppose that a and b are such that, if a best explained speakers’ use of ‘heap’ in some counterfactual situation, then a would determine a different meaning for ‘heap’ than b, were b to provide the best explanation of speakers’ use of ‘heap’ in some other counterfactual situation. In that case, if the meaning of ‘heap’ is always precise, then a and b should never provide equally good (or bad) explanations of speakers’ use of ‘heap.’
23 Williamson characterizes default principles as ranging over utterances, not sentences. For continuity I will continue to speak in terms of sentences, but we remain free to say that the vagueness of a sentence derives from the possibility of someone's using it to make a vague utterance.
24 At one point, after saying he is not sure how to account for our knowledge of default principles, Williamson notes that this poses no special problem for the epistemic theory and that it leaves ‘plenty of scope for further inquiry into the nature of stipulation’ (‘Reply,’ 951). However, the suggestion here simply seems to be that, in lieu of knowledge of default principles, we can rely upon stipulation to explore the space of possibilities.
25 Williamson, ‘Reply,’ 951. To say that a default principle such as Default-F is a ‘supervenience conditional’ amounts to this: Where S is a vague sentence and c is a context borderline for S, Default-F says that if speakers neither assent to nor dissent from S in c, then S is false. Default-F is thus a supervenience conditional whose antecedent is a situated pattern of use and whose consequent is the value (F).
26 For example, he says, ‘The epistemic theory of vagueness makes the connection between meaning and use no harder to understand than it already is. At worst, there may be no account to be had, beyond a few salutary remarks. Meaning may supervene upon use in an unsurveyably chaotic way’ (Vagueness, 209).
27 In Horwich's terminology, use must determine, but need not DETERMINE, meaning.
28 I would like to thank Brad Cohen, Chris Hill, Joe Moore, and especially Timothy Williamson for helpful discussion and/or comments on previous drafts of this paper. Thanks also to the National Endowment for the Humanities for providing some financial support.