This paper gives a generalization of Jim Joyce’s 1998 argument for probabilism, dropping his background assumption that logic and semantics are classical. Given a wide variety of nonclassical truth-value assignments, Joyce-style arguments go through, allowing us to identify in each case a class of “nonclassically coherent” belief states. To give a local characterization of coherence, we need to identify a notion of logical consequence to use in an axiomatization. There is a very general, ‘no drop in truth-value’ characterization that will do the job. The result complements Paris’s 2001 discussion of generalized forms of Dutch books appropriate to nonclassical settings.
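As a hedged toy illustration of the classical case that Joyce-style arguments generalize (this is not the paper’s construction, and the specific numbers are chosen for the example): with the Brier score measuring inaccuracy, a probabilistically incoherent credence pair in p and not-p is accuracy-dominated by a coherent pair at every classical truth-value assignment.

```python
# Toy illustration of accuracy dominance under the Brier score.
# An incoherent credence pair (credences in p and not-p summing to
# more than 1) is strictly beaten by its coherent projection at
# BOTH classical worlds -- the core of Joyce-style arguments.
def brier(credences, world):
    """Sum of squared distances from credences to truth values."""
    return sum((c - v) ** 2 for c, v in zip(credences, world))

incoherent = (0.6, 0.6)  # credences in p and in not-p; sum is 1.2
coherent = (0.5, 0.5)    # projection onto the coherent line x + y = 1

# The two classical worlds: (p true, not-p false) and vice versa.
for world in [(1, 0), (0, 1)]:
    assert brier(coherent, world) < brier(incoherent, world)
```

The nonclassical generalization replaces the two classical worlds with the wider class of admissible truth-value assignments, which is exactly where the paper’s notion of “nonclassical coherence” comes in.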
This paper is a systematic exploration of non-wellfounded mereology. Motivations and applications suggested in the literature are considered. Some are exotic, like Borges’ Aleph and the Trinity; other examples are less so, like time-traveling bricks and even Geach’s Tibbles the Cat. We point out that the transitivity of non-wellfounded parthood is inconsistent with extensionality. A non-wellfounded mereology is developed with careful consideration paid to rival notions of supplementation and fusion. Two equivalent axiomatizations are given and compared to classical mereology. We provide a class of models with respect to which the non-wellfounded mereology is sound and complete.
There is an interesting logical/semantic issue with some mathematical languages and theories. In the language of (pure) complex analysis, the two square roots of −1 are indiscernible: anything true of one of them is true of the other. So how does the singular term ‘i’ manage to pick out a unique object? This is perhaps the most prominent example of the phenomenon, but there are some others. The issue is related to matters concerning the use of definite descriptions and singular pronouns, such as donkey anaphora and the problem of indistinguishable participants. Taking a cue from some work in linguistics and the philosophy of language, I suggest that i functions like a parameter in natural deduction systems. This may require some rethinking of the role of singular terms, at least in mathematical languages.
A normalization procedure is given for classical natural deduction with the standard rule of indirect proof applied to arbitrary formulas. For normal derivability and the subformula property, it is sufficient to permute down instances of indirect proof whenever they have been used for concluding a major premiss of an elimination rule. The result applies even to natural deduction for classical modal logic.
In the Tractatus, Wittgenstein advocates two major notational innovations in logic. First, identity is to be expressed by identity of the sign only, not by a sign for identity. Secondly, only one logical operator, called “N” by Wittgenstein, should be employed in the construction of compound formulas. We show that, despite claims to the contrary in the literature, both of these proposals can be realized, severally and jointly, in expressively complete systems of first-order logic. Building on early work of Hintikka’s, we identify three ways in which the first notational convention can be implemented, show that two of these are compatible with the text of the Tractatus, and argue on systematic and historical grounds, adducing posthumous work of Ramsey’s, for one of these as Wittgenstein’s envisaged method. With respect to the second Tractarian proposal, we discuss how Wittgenstein distinguished between general and non-general propositions and argue that, claims to the contrary notwithstanding, an expressively adequate N-operator notation is implicit in the Tractatus when taken in its intellectual environment. We finally introduce a variety of sound and complete tableau calculi for first-order logics formulated in a Wittgensteinian notation. The first of these is based on the contemporary notion of logical truth as truth in all structures. The others take into account the Tractarian notion of logical truth as truth in all structures over one fixed universe of objects. Here the appropriate tableau rules depend on whether this universe is infinite or finite in size, and in the latter case on its exact finite cardinality.
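The truth-functional core of Wittgenstein’s N operator is generalized joint denial: N applied to a collection of propositions is true exactly when all of them are false. A minimal sketch (the variadic function name `N` and the reductions shown are the standard textbook ones, not notation from the paper):

```python
# Wittgenstein's N operator read as generalized joint denial:
# N(p1, ..., pn) is true iff every argument is false.
def N(*props: bool) -> bool:
    return not any(props)

# The classical connectives recovered from N alone (standard reductions):
def neg(p):      return N(p)           # ~p      = N(p)
def disj(p, q):  return N(N(p, q))     # p or q  = N(N(p, q))
def conj(p, q):  return N(N(p), N(q))  # p and q = N(N(p), N(q))

# Truth-table check that the reductions match the usual connectives.
for p in (True, False):
    assert neg(p) == (not p)
    for q in (True, False):
        assert disj(p, q) == (p or q)
        assert conj(p, q) == (p and q)
```

The expressive subtleties the paper addresses arise not at this propositional level but in combining N with generality, which is where the claims of inadequacy in the literature have focused.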
As it is obviously easy to express how propositions can be constructed by means of this operation and how propositions are not to be constructed by means of it, this must be capable of exact expression.
This is part A of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual ‘if A then B’ to be true even in the presence of relevant ‘A and not B’-worlds, as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis’ truth-conditional semantics and logic of counterfactuals. At the same time, the semantics overlaps in various ways with the non-truth-conditional suppositional theory for conditionals that derives from Ernest Adams’ work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other one being truth based as determined by our probabilistic semantics; these degrees could not always coincide due to a new triviality result for counterfactuals, and they should not be identified in the light of their different interpretation and pragmatic purpose. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, that is, the thesis that most ordinary counterfactuals are false.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions that express high conditional probabilities. In this paper we investigate four prominent LP systems, the systems O, P, Z, and QC. These systems differ in the number of inferences they license (O ⊂ P ⊂ Z ⊂ QC). LP systems that license more inferences enjoy the possible reward of deriving more true and informative conclusions, but with this possible reward comes the risk of drawing more false or uninformative conclusions. In the first part of the paper, we present the four systems and extend each of them by theorems that allow one to compute almost-tight lower probability bounds for the conclusion of an inference, given lower probability bounds for its premises. In the second part of the paper, we investigate by means of computer simulations which of the four systems provides the best balance of reward versus risk. Our results suggest that system Z offers the best balance.
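To give a feel for the kind of bound propagation involved (a hedged illustration, not one of the paper’s actual theorems), the AND rule of System P admits a simple union-bound style lower bound: if P(B|A) ≥ 1 − e1 and P(C|A) ≥ 1 − e2, then P(B and C | A) ≥ 1 − (e1 + e2), since the two exception sets can together carry at most e1 + e2 of the conditional probability mass.

```python
# Illustrative bound propagation for System P's AND rule:
# from lower bounds on P(B|A) and P(C|A), a lower bound on
# P(B and C | A) via the union bound on the exception sets.
def and_rule_lower_bound(p_b_given_a: float, p_c_given_a: float) -> float:
    """Lower bound on P(B & C | A) from bounds on the two premises."""
    e1 = 1.0 - p_b_given_a  # max exception mass for the first premise
    e2 = 1.0 - p_c_given_a  # max exception mass for the second premise
    return max(0.0, 1.0 - (e1 + e2))

# Example: premises each hold with conditional probability >= 0.95,
# so the conclusion holds with conditional probability >= 0.9
# (up to floating-point rounding).
print(and_rule_lower_bound(0.95, 0.95))
```

Bounds of this shape make vivid the reward/risk trade-off the paper simulates: systems licensing more inferences chain more such steps, and the guaranteed bounds on conclusions degrade accordingly.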
The multiverse view in set theory, introduced and argued for in this article, is the view that there are many distinct concepts of set, each instantiated in a corresponding set-theoretic universe. The universe view, in contrast, asserts that there is an absolute background set concept, with a corresponding absolute set-theoretic universe in which every set-theoretic question has a definite answer. The multiverse position, I argue, explains our experience with the enormous range of set-theoretic possibilities, a phenomenon that challenges the universe view. In particular, I argue that the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and as a result it can no longer be settled in the manner formerly hoped for.
Valentini (1983) has presented a proof of cut-elimination for provability logic GL for a sequent calculus using sequents built from sets as opposed to multisets, thus avoiding an explicit contraction rule. From a formal point of view, it is more satisfying to explicitly identify the applications of the contraction rule that are ‘hidden’ in proofs of cut-elimination for such sequent calculi. There is often an underlying assumption that the move to a proof of cut-elimination for sequents built from multisets is straightforward. Recently, however, it has been claimed that Valentini’s arguments to eliminate cut do not terminate when applied to a multiset formulation of the calculus with an explicit rule of contraction. The claim has led to much confusion and various authors have sought new proofs of cut-elimination for GL in a multiset setting.
Here we refute this claim by placing Valentini’s arguments in a formal setting and proving cut-elimination for sequents built from multisets. The use of sequents built from multisets enables us to accurately account for the interplay between the weakening and contraction rules. Furthermore, Valentini’s original proof relies on a novel induction parameter called “width” which is computed ‘globally’. It is difficult to verify the correctness of his induction argument based on “width”. In our formulation, however, verification of the induction argument is straightforward. Finally, the multiset setting also introduces a new complication in the case of contractions above cut when the cut-formula is boxed. We deal with this using a new transformation based on Valentini’s original arguments.
Finally, we discuss the possibility of adapting this cut-elimination procedure to other logics axiomatizable by formulae of a syntactically similar form to the GL axiom.
This is part B of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual ‘if A then B’ to be true even in the presence of relevant ‘A and not B’-worlds, as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis’ truth-conditional semantics and logic of counterfactuals. At the same time, the semantics overlaps in various ways with the non-truth-conditional suppositional theory for conditionals that derives from Ernest Adams’ work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other one being truth based as determined by our probabilistic semantics; these degrees could not always coincide due to a new triviality result for counterfactuals, and they should not be identified in the light of their different interpretation and pragmatic purpose. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, that is, the thesis that most ordinary counterfactuals are false.
This paper provides a historically sensitive discussion of Carnap’s theory of extremal axioms, first developed in the late 1920s. The main focus is set on the unpublished documents of the projected second part of his manuscript Untersuchungen zur allgemeinen Axiomatik (RC 081-01-01 to 081-01-33). Carnap’s theory will be assessed with respect to two interpretive issues. The first concerns his mathematical sources, that is, the mathematical axioms on which his extremal axioms were based. The second concerns Carnap’s understanding of the relationship between the “completeness of the models” and other metatheoretic notions investigated by him at the time, most notably that of categoricity. The paper surveys Carnap’s different attempts to explicate the extremal properties of a theory and puts his results in context with related metamathematical research at the time.
We provide a Hilbert-style axiomatization of the logic of ‘actually’, as well as a two-dimensional semantics with respect to which our logics are sound and complete. Our completeness results are quite general, pertaining to all such actuality logics that extend a normal and canonical modal basis. We also show that our logics have the strong finite model property and permit straightforward first-order extensions.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources—information, time, and computational capacity, to name just three. This motivates individual agents to set their cognitive agendas proportionately, that is, in ways that carry some prospect of success with the resources on which they are able to draw. It also puts a premium on cognitive strategies which make economical use of those resources. These latter I call scant-resource adjustment strategies, and they supply the context for an analysis of abduction. The analysis is Peircian in tone, especially in the emphasis it places on abduction’s ignorance-preserving character. My principal purpose here is to tie abduction’s scant-resource adjustment capacity to its ignorance preservation.
This article explores ways in which the Revision Theory of Truth can be expressed in the object language. In particular, we investigate the extent to which semantic deficiency, stable truth, and nearly stable truth can be so expressed, and we study different axiomatic systems for the Revision Theory of Truth.
The paper is concerned with the way in which “ontology” and “realism” are to be interpreted and applied so as to give us a deeper philosophical understanding of mathematical theories and practice. Rather than argue for or against some particular realistic position, I shall be concerned with possible coherent positions, their strengths and weaknesses. I shall also discuss related but different aspects of these problems. The terms in the title are the common thread that connects the various sections.
Igor Douven establishes several new intransitivity results concerning evidential support. I add to Douven’s very instructive discussion by establishing two further intransitivity results and a transitivity result.