1 Introduction
Inferentialism is a theory in the philosophy of language which claims that the meanings of expressions are constituted by inferential roles or relations, rather than truth and reference [Reference Brandom1, Reference Brandom2, Reference Steinberger, Murzi, Hale, Wright and Miller53]. It naturally lends itself to a proof-theoretic semantics, where meaning is understood in terms of inference rules applied within a proof system, instead of more traditional model-theoretic semantics. Most work in proof-theoretic semantics has been focused on logical constants, with relatively little work on the semantics of non-logical vocabulary.
This paper contributes to extending proof-theoretic semantics to encompass non-logical vocabulary. Drawing on Robert Brandom’s idea of material inference [Reference Brandom1, Reference Brandom2] and Greg Restall’s bilateralist interpretation of the multiple conclusion sequent calculus [Reference Restall, Hájek, Valdés-Villanueva and Westerståhl37, Reference Restall38], I present a proof-theoretic semantics for atomic sentences and their constituent names and predicates that is analogous to standard model-theoretic semantics. Material inferences are those which are valid in virtue of their non-logical vocabulary. For example, from ‘Paula is a platypus’ to ‘Paula is a monotreme’. Brandom’s claim is that names and predicates are governed by structurally different material inference rules, with the former’s, but not the latter’s, material inferential relations always being symmetric. For example, from ‘Clark Kent flies’ to ‘Superman flies’ and vice versa. These material inferential relations between atomic sentences are represented in a bilateralist atomic system by general rule forms. Applied to subatomic systems, the symmetry and asymmetry Brandom uses to differentiate names and predicates easily fall out of simple restrictions on the general rule forms.
The resulting system has several interesting features: (1) the rules are harmonious and stable; (2) the rules are analogous to familiar model-theoretic semantics; and (3) the semantics is compositional, in that the rules for atomic sentences are determined by those for their constituent names and predicates.
I first survey existing work on inferentialist and proof-theoretic semantics for non-logical vocabulary. Second, I sketch Greg Restall’s bilateralist interpretation of the classical multiple conclusion sequent calculus, which will be extended in this paper, first to atomics and then to their constituents. Third, I introduce Brandom’s notion of material inference and show how it can be formalised in an atomic system—a proof system for atomic sentences. I show that it is ‘well-behaved’ in the sense of being both harmonious and stable. Fourth, I use Brandom’s distinction between names and predicates in terms of their inferential roles to extend the previous proof system to accommodate subsententials.Footnote 1 This subatomic system allows for a compositional proof-theoretic semantics analogous to the standard model-theoretic one, which is a central aim of the paper. Lastly, I finish with some brief concluding remarks on the paper and possible future research.
2 Current work
In this section I survey some of the existing literature on the topic, oriented towards the ways in which this paper’s work draws on but also differs from previous work in the field. I will begin with the philosophical background to proof-theoretic semantics (PTS) and then summarise existing applications of PTS to non-logical vocabulary.Footnote 2
Philosophically, much work in PTS takes inspiration from Gerhard Gentzen’s claim that the introduction rules of his natural deduction systems represent the definitions of the logical constants and the elimination rules the consequences of these definitions ([Reference Gentzen and Szabo13], p. 80). Gentzen’s claim naturally lends itself to a theory in which the meaning of a sentence is understood in terms of a direct verification of the sentence, expressed by the introduction rules, e.g., [Reference Dummett6, Reference Prawitz33]. The use of a sentence in assertion is tied up with its method of verification. Assertions are warranted if the speaker possesses a verification of the sentence, and so proofs can be thought of as preserving warrant for assertion from premises to conclusion. This general picture is the norm within PTS [Reference Schroeder-Heister, Haeusker, de Campos Sanz and Lopes46, p. 160]. The background philosophical commitment to understanding meaning in terms of verification, and the formalisation of this within single-conclusion natural deduction systems, means that the majority of PTS leans towards intuitionistic and related logics [Reference Schroeder-Heister and Zalta48, Section 1.2]. An alternative formulation of the relation between meaning and proofs within the PTS tradition is to treat the elimination rules of a natural deduction system (or left-rules of a sequent calculus) as basic and the introduction (or right) rules as derived. This fits with an understanding of meaning on which, instead of verification, falsification or refutation is treated as fundamental, and hence leans towards dual-intuitionistic logics [Reference Prawitz and Fenstad30, Appendix 2; Reference Prawitz and Hahn32, Reference Schroeder-Heister43, Reference Schroeder-Heister, Haeusker, de Campos Sanz and Lopes46].Footnote 3
Despite their differences, the above verificationist and falsificationist forms of PTS share three features: (i) one speech act, whether assertion or denial, is treated as basic; (ii) one kind of rule (corresponding to the basic speech act), whether introduction (right-hand) or elimination (left-hand), is treated as primary and the other derivative; and (iii) the proof systems involved are multiple-premise, single-conclusion systems. The current paper’s theory will differ in: (i) treating both assertion and denial as basic; (ii) treating both kinds of rules as equiprimordial; and (iii) the proof system being a symmetrical multiple-premise, multiple-conclusion system.
The majority of work in PTS has been focused on the meanings of logical constants, with the extension of PTS beyond logical constants being identified by Schroeder-Heister as one of the ‘Open problems in proof-theoretic semantics’ [Reference Schroeder-Heister, Piecha and Schroeder-Heister47]. There are, however, a number of proof-theoretic approaches to the semantics of non-logical vocabulary that generalise the dominant verificationist approach.Footnote 4
Much work in the PTS of non-logical vocabulary draws on Prawitz’s work on atomic systems [29–31].Footnote 5 Prawitz defines an atomic system $\mathcal {S}$ as a pair $\langle \mathcal {L}, \mathcal {R}\rangle $ of a language $\mathcal {L}$ made up of atomic sentences, and a set of inference rules $\mathcal {R}$ , the premises and conclusions of which are atomic sentences of $\mathcal {L}$ . The inference rules of an atomic system can be seen as meaning conferring, analogous to those of a logical system. Per Martin-Löf’s early work on inductive definitions [Reference Martin-Löf and J. E.24], and Peter Schroeder-Heister and Lars Hallnäs’ work on definitional reflection [Reference Hallnäs and Schroeder-Heister14, Reference Hallnäs and Schroeder-Heister15] can be seen as characterising classes of atomic systems by placing certain restrictions on the forms of the rules in $\mathcal {R}$ .
Schroeder-Heister and Hallnäs’ theory of definitional reflection will be summarised as it is most relevant to the discussion in the rest of the paper. We begin with defining clauses of the form $A\Leftarrow B$ . A collection of clauses $\mathbb {D}_A$ , headed by an atomic sentence A, is the definition of A (Figures 1–3).
In a natural deduction setting, the definition $\mathbb {D}_A$ gives rise to introduction rules which directly resemble the definitional clauses. These are the definitional closure of A. Definitional closure is accompanied by the elimination rule, or rule of definitional reflection. In a single-succedent sequent calculus setting, the definitional closure and definitional reflection are represented as in Figures 4 and 5.
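Sketching the standard formulations (with each clause body $B_i$ treated, for simplicity, as a single formula), a definition $\mathbb {D}_A = \{A \Leftarrow B_1, \ldots, A \Leftarrow B_n\}$ gives rise to a right rule (definitional closure) and a left rule (definitional reflection) of roughly the following shapes:

```latex
% Definitional closure: A may be derived from any one of its defining conditions.
\[
\frac{\Gamma \vdash B_i}{\Gamma \vdash A}\ (\vdash A), \quad 1 \le i \le n
\]
% Definitional reflection: whatever follows from every defining condition
% of A follows from A itself.
\[
\frac{\Gamma, B_1 \vdash C \qquad \cdots \qquad \Gamma, B_n \vdash C}{\Gamma, A \vdash C}\ (A \vdash)
\]
```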
Strictly speaking, the definition is what is primary and the two principles of definitional closure and definitional reflection give the definition its inferential role [Reference Schroeder-Heister, Piecha and Schroeder-Heister47, p. 273]. However, it is difficult not to read the definitional clauses within $\mathbb {D}_A$ as introduction (or right-hand) rules. This is partly syntactic, in that the clauses “look” like the introduction rules in the definitional closure of a sentence. More than this, however, it reflects what Schroeder-Heister calls ‘the primacy of assertion over other speech acts such as assuming or denying,… implicit in most approaches to proof-theoretic semantics’ [Reference Schroeder-Heister, Haeusker, de Campos Sanz and Lopes46, p. 160]. This is due to the directedness of definitional clauses, which lead from the defining body to the defined head [Reference Schroeder-Heister, Haeusker, de Campos Sanz and Lopes46, pp. 160, 172], i.e., from license to assert the sentences in the body to license to assert that in the head. The elimination rules in the definitional reflection of a sentence then come off as secondary. This can be contrasted with a view that takes rejection or denial as basic and dualises the assertion-based approach to definitional reflection [Reference Schroeder-Heister, Haeusker, de Campos Sanz and Lopes46, Sections 4 and 5].
One aspect of definitional reflection that should be noted for the following discussion is that in contrast to basic logic and similar approaches,Footnote 6 the rule of $\mathrm {Cut}$ is not used in the sequent calculus setting to justify harmony between different rules.
Rather, a uniform approach is taken in both the natural deduction and sequent calculus systems. However, a relation to $\mathrm {Cut}$ may be thought of as implicit in the approach, as definitional reflection and definitional closure ensure “balance” between right and left rules as in the principal steps of a $\mathrm {Cut}$ elimination proof. For example, suppose we have the following derivation which first uses definitional closure and definitional reflection, and then an application of the $\mathrm {Cut}$ rule on A. We may eliminate this $\mathrm {Cut}$ on A, pushing the $\mathrm {Cut}$ upwards to a $\mathrm {Cut}$ on the formula $B_i$ used to derive A in the application of definitional closure:
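Schematically (a sketch, again treating clause bodies as single formulas), the reduction runs:

```latex
% A Cut on the defined atom A, where A is introduced on the right by
% definitional closure and on the left by definitional reflection ...
\[
\frac{\dfrac{\Gamma \vdash B_i}{\Gamma \vdash A}\ (\vdash A)
      \qquad
      \dfrac{\Delta, B_1 \vdash C \;\; \cdots \;\; \Delta, B_n \vdash C}{\Delta, A \vdash C}\ (A \vdash)}
     {\Gamma, \Delta \vdash C}\ \mathrm{Cut}
\]
% ... reduces to a Cut on the defining condition B_i used in the closure step.
\[
\frac{\Gamma \vdash B_i \qquad \Delta, B_i \vdash C}{\Gamma, \Delta \vdash C}\ \mathrm{Cut}
\]
```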
My approach to be outlined later in this paper will share these features with that of definitional reflection. However, it will do so without taking either introduction (right-hand) or elimination (left-hand) rules as basic.
Sara Negri and Jan von Plato’s work on converting axioms to rules is another instance of applying proof theory to non-logical vocabulary, namely that of mathematical theories [Reference Negri and von Plato25, Reference Negri and von Plato26]. Rather than formulating mathematical theories axiomatically, they show, for a large class of theories, how axioms may be converted into inference rules for the non-logical vocabulary employed in the theory. The resulting rules are analogous to those in pure logic in the sense that the proof systems involving them are normalisable and have properties analogous to the subformula property. The focus in the current paper is more general in the sense that it focuses on arbitrary vocabulary rather than mathematical vocabulary, but is more specific in that it is confined to atomic sentences and their constituents.
PTS takes the inference rules of a proof system to be meaning conferring. One way to interpret this is to say that there is nothing more to meaning than the inference rules, or perhaps that there are strictly speaking no “reified meanings” at all.Footnote 7 A second approach is to think of inference rules as meaning determining in the sense that the proof system can be used to derive standard model-theoretic denotations.Footnote 8 A third approach, taken by Nissim Francez and his collaborators [Reference Francez8, Reference Francez9, Reference Francez and Dyckhoff10, Reference Francez, Dyckhoff and Ben-Avi11], is to think of the inference rules as determining a proof-theoretic semantic value or denotation rather than a model-theoretic one. For sentential meaning, Francez defines the semantic value of a sentence S to be a function from contexts $\Gamma $ to the set of canonical derivations of S from $\Gamma $ . Of importance for the rest of the paper, Francez has extended this form of PTS to subsentential meanings. Sentential semantic values are taken as given, and then functional abstraction is applied to extract those of the subsentential constituents. The semantic value of a subsentential expression is its contribution to that of sentences, i.e., either a pure argument or a particular function.
Like Francez’s work, the current paper’s theory will assign proof-theoretic meanings to atomic and subsentential expressions, though with two major differences. The first is that the semantic types for different syntactic expressions will be differentiated in terms of the structure of their inference rules rather than their functional contribution to that of sentences. The second is that rather than taking the meanings of atomics as given, as with Prawitz, Martin-Löf, and Hallnäs and Schroeder-Heister, the meanings of atomic sentences will be given within the proof system itself.
Bartosz Więckowski [Reference Wieckowski56, Reference Wieckowski57] has developed a very different approach to PTS for atomic and subsentential vocabulary from those inspired by Prawitz, including that of this paper.Footnote 9 Rather than treating the meanings of atomic sentences as being given by inference rules linking atomic sentences, Więckowski treats their meaning as being given by the way in which atomic sentences are derived from the defeasible information associated with their constituent terms. In Więckowski’s subatomic systems, terms, both nominal and predicative, are assigned sets of atomic sentences in which they feature. These are their term assumptions, which can be interpreted as the defeasible information associated with the terms. An atomic sentence can be derived (via an introduction rule) from the term assumptions of its constituents if it is contained in their intersection. Atomic sentences can be eliminated to derive term assumptions for each constituent that are singletons containing the eliminated atomic sentence.
Więckowski understands the meaning of atomic sentences as determined by the introduction rules (the derivations from term assumptions of the sentence) and the meaning of terms (whether nominal or predicative) to be determined by the term assumptions associated with them in normal form subatomic derivations. Więckowski’s is a very different take on PTS for atomics and subsententials from that of Prawitz, Martin-Löf, and Hallnäs and Schroeder-Heister. In what follows, the theory I will be presenting is more similar to the latter than to Więckowski’s, inasmuch as it takes both the meanings of atomics and their constituents to be determined by inference rules rather than treating the two as different.
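As a rough illustration of the mechanism (a hypothetical toy encoding, not Więckowski’s own formalism; the names and the example sentences are illustrative), term assumptions and the introduction condition for atomic sentences can be modelled as follows:

```python
# Toy model of Więckowski-style term assumptions: each term carries a set
# of atomic sentences, and an atomic sentence is introducible iff it lies
# in the intersection of the assumption sets of its constituent terms.

def introducible(sentence, term_assumptions, terms):
    """terms: the constituent terms of `sentence`;
    term_assumptions: dict mapping each term to its set of atomic sentences."""
    shared = set.intersection(*(term_assumptions[t] for t in terms))
    return sentence in shared

# Hypothetical term assumptions for two terms.
assumptions = {
    "Paula": {"Paula is a platypus", "Paula swims"},
    "is a platypus": {"Paula is a platypus"},
}
```

On this model, elimination would run the other way: from a derived atomic sentence, each constituent term receives the singleton of that sentence as a term assumption.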
The theory outlined in this paper draws heavily on Robert Brandom’s work [1–3]. Although Brandom provides philosophical views about the meanings of non-logical expressions, there is little work formalising them. Brandomian approaches have tended to focus on formalising the notion of logical vocabulary “making explicit” underlying material inferential relations, e.g., negation as explicating incoherence and the conditional as explicating material entailment [Reference Brandom and Aker4, Reference Lance21, Reference Lance and Kremer22, Reference Lance and Kremer23, Reference Piwek28]. Recent work by Hlobil [Reference Hlobil, Arazim and Danák18] and Shimamura [Reference Shimamura, Baltag, Seligman and Yamada50] has begun with a non-monotonic consequence relation for an atomic base and extended this with logical vocabulary within a sequent calculus, where the entailments within the atomic base are used as axioms in the logical sequent calculus. The theory that follows will differ from this work in two respects. First, like [Reference Brandom and Aker4], it will present a monotonic system, similar to the incompatibility entailments discussed in [Reference Brandom3]. Second, it will present a sequent calculus for the atomic base itself rather than treating this as given. Recent work by Stovall [Reference Stovall54] has provided an inferentialist expressivist theory of characterising generics and shown how existing theories of generics can be formalised proof-theoretically. However, unlike the theory presented in this paper, Stovall focuses on a particular kind of non-logical vocabulary rather than the general case.
3 Bilateralism
In this section I briefly sketch a bilateralist interpretation of the classical multiple conclusion sequent calculus, drawing on the work of Greg Restall [37–39]. In the following sections, it is extended to atomic sentences and their constituents, predicates and names.
Bilateralism is a kind of semantic pragmatism—the claim that meaning (semantics) depends on use (pragmatics), where the point of attributing meanings is to explain (or prescribe) aspects of use. Expounding this position, Brandom says:
[I]t is pointless to attribute semantic structure or content that does no pragmatic explanatory work. It is only insofar as it is appealed to in explaining the circumstances under which judgments and inferences are properly made and the proper consequences of doing so that something associated by the theorist with interpreted states or expressions qualifies as a semantic interpretant, or deserves to be called a theoretical concept of content [Reference Brandom1, p. 144].
Importantly, this is a normative rather than dispositional pragmatism (e.g., [Reference Horwich19])—meaning is determined by norms of correct use not patterns of actual use. This leaves open the question: which norms of use? Bilateralism and unilateralism are two different answers to this. Unilateralists are those, such as [Reference Brandom1] and [Reference Dummett6], who answer that the single speech act of assertion determines meaning. There are many kinds of bilateralism [Reference Francez7, Reference Price34, Reference Restall, Hájek, Valdés-Villanueva and Westerståhl37, Reference Rumfitt41]. However, what is shared between them is the view that norms of both assertion and denial determine meaning, where denials of A are not simply assertions of $\neg A$ . Here I adopt Restall’s take on bilateralism as an interpretation of the classical multiple conclusion sequent calculus. He gives three main motivations for bilateralism:
1. Many speakers appear, developmentally, to be able to deny propositions before being able to assert negations.

2. Assertion and denial provide a framework for both classical and some non-classical logics. Classical logicians treat the assertion [denial] of a proposition together with its negation as incoherent, whereas many non-classical logicians will treat either the assertion or denial of both propositions together as coherent.

3. It shows how consequence relations place cognitive constraints on us. Asserting A does not require one to assert all of its logical consequences nor actively form beliefs about them. Rather, asserting A rules out denying A’s consequences. Similarly, denying A rules out asserting A’s antecedents [Reference Restall, Hájek, Valdés-Villanueva and Westerståhl37].
To simplify talk about interactions between assertions and denials, we introduce the notion of a position $\Gamma :\Delta $ , made up of the possibly empty finite multisets of sentences asserted $\Gamma $ and denied $\Delta $ . These positions are bound by norms of coherence and incoherence. $\Gamma \vdash \Delta $ will be written to mean that the position $\Gamma :\Delta $ is incoherent. This provides a natural reading of the classical multiple conclusion sequent calculus. Below are some standard rules for the logical connectivesFootnote 10 :
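For instance, the negation and conjunction rules take the following standard forms (a representative sketch; the remaining connectives are treated analogously):

```latex
\[
\frac{\Gamma \vdash A, \Delta}{\Gamma, \neg A \vdash \Delta}\ [\neg\mathrm{L}]
\qquad
\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash \neg A, \Delta}\ [\neg\mathrm{R}]
\]
\[
\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \wedge B \vdash \Delta}\ [\wedge\mathrm{L}]
\qquad
\frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}{\Gamma \vdash A \wedge B, \Delta}\ [\wedge\mathrm{R}]
\]
```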
Read the turnstile as recording that it is incoherent to assert all to the left of the turnstile together with denying all to the right. The left rules [ $\mathrm {L}$ ] govern assertions and the right rules [ $\mathrm {R}$ ] govern denials. Top-to-bottom, each rule says that if the positions above the line are incoherent, then so is the position below the line. If we contrapose this reading, then reading the rules bottom-to-top, they say that if the position below the line is coherent, then so is at least one of those above the line. E.g., $\neg \mathrm {L}$ says that if it is incoherent to deny A, in the context of asserting all of $\Gamma $ and denying all of $\Delta $ , then it is also incoherent to assert $\neg A$ in the same context. Read the other way, it says that if it is coherent to assert $\neg A$ in some context, then it is coherent to deny A also. The two negation rules have the effect of making the assertion [denial] of a negation and the denial [assertion] of its negand have equivalent force. A logic with truth “gaps” or “gluts” will modify these to remove one or both of these equivalences.Footnote 11
The structural rules in Figure 6 also have natural readings as governing assertions and denials in general. $\mathrm {Id}$ records the basic incoherence of asserting and denying the same thing.Footnote 12 In contrast, $\mathrm {Cut}$ read bottom-to-top tells us that assertion and denial are exhaustive in the sense that if a position $\Gamma :\Delta $ is coherent, then extending that position with one of either asserting or denying A results in a coherent position. Top-to-bottom, it tells us that if neither the extension of a position $\Gamma : \Delta $ with the denial nor the assertion of A is coherent, then the incoherency is within the position $\Gamma : \Delta $ itself, i.e., $\Gamma \vdash \Delta $ . Weakening $\mathrm {K}$ and contraction $\mathrm {W}$ follow naturally, as, for the former, once a position is incoherent adding in more assertions or denials won’t remove this, and for the latter, the number of times an assertion or denial is made does not matter for the position’s coherence or incoherence.Footnote 13 The structural rules in Figure 6, except $\mathrm {Cut}$ , will be assumed for the atomic and subatomic systems discussed in the rest of the paper. However, the systems under discussion will have the feature that $\mathrm {Cut}$ is an admissible rule.
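In their usual formulations (a sketch of the standard presentations, which the rules in Figure 6 presumably resemble), identity, Cut, weakening, and contraction are:

```latex
\[
\frac{}{A \vdash A}\ [\mathrm{Id}]
\qquad
\frac{\Gamma \vdash A, \Delta \qquad \Gamma, A \vdash \Delta}{\Gamma \vdash \Delta}\ [\mathrm{Cut}]
\]
\[
\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\ [\mathrm{KL}]
\qquad
\frac{\Gamma \vdash \Delta}{\Gamma \vdash A, \Delta}\ [\mathrm{KR}]
\qquad
\frac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}\ [\mathrm{WL}]
\qquad
\frac{\Gamma \vdash A, A, \Delta}{\Gamma \vdash A, \Delta}\ [\mathrm{WR}]
\]
```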
4 Material inference and atomic systems
Restall’s bilateralism provides an inferentialist semantics for logical vocabulary, where the meanings of logical constants are their inferential roles, represented by rules in a proof system. What though, about non-logical expressions and logically atomic sentences? This section shows how a notion of material rather than formal logical inference can accommodate these cases and be appropriately represented in a proof system. First, the notion of material inference is introduced. Second, proof systems for material inferences are defined, and third given general rule forms. These are then shown to be well behaved.
4.1 Material inference
An inferentialist semantics that goes beyond logical vocabulary requires a notion of valid inference other than just that of being logically valid. To see this, suppose that ‘valid inference’ were assimilated to ‘logically valid inference’. The following is an example of a logically valid inference: from ‘If Paula is a platypus, then Paula is a monotreme’ and ‘Paula is a platypus’ to ‘Paula is a monotreme’.
This has the logical form $A\rightarrow B, A\vdash B$ . That the conclusion follows from the premises need have nothing to do with the meanings of A and B. The problem for an inferentialist who restricts their inferences to only those that are logically valid is that logically valid inferences cannot, in general, tell us about the meaning of the non-logical vocabulary involved. Rather than just those inferences that are valid in virtue of their logical form, inferentialists need to also attend to those that Brandom calls material inferences [Reference Brandom1, Chapter 2, IV.2., Reference Brandom2, Chapter 1, Section 5]. These are inferences that are valid in virtue of their conceptual contents or non-logical vocabulary. E.g., the inference from ‘Paula is a platypus’ to ‘Paula is a monotreme’.
This inference isn’t logically valid as it is an argument of the (logical) form $p\vdash q$ . Rather, it is valid in virtue of the contents of ‘platypus’ and ‘monotreme’. It is these conceptual contents constituted by material inferential relations which can be made explicit by logical vocabulary.Footnote 14 Logically valid inferences can be understood as those materially good inferences which remain good when holding logical vocabulary fixed and substituting arbitrary non-logical vocabulary. They are good in virtue of the contents of logical expressions [Reference Brandom1, p. 104, Reference Brandom2, p. 55].
4.2 Atomic systems, local soundness and completeness
Using Brandom’s notion of material inference as a philosophical basis, I now formalise this idea by modifying Dag Prawitz’s notion of an atomic system, discussed in Section 2. This notion of an atomic system includes an assignment function in addition to a language and a set of rules.
Atomic System: an atomic system is a triple $\langle \mathcal {L} , \mathcal {R} , v\rangle $ of a language $\mathcal {L}$ , a set of inference rules $\mathcal {R}$ , and an assignment function v.
By stipulation, $\mathcal {L}$ is made up only of atomic constants (this will be lifted in Section 5.3). $\mathcal {R}$ is made up of inference rules linking atomic formulas. v is a function from subsets of $\mathcal {R}$ to expressions in $\mathcal {L}$ . Rather than restricting the rules in $\mathcal {R}$ to behave appropriately, restrictions on v are introduced throughout the rest of the paper.
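As a rough illustration (a hypothetical encoding, not part of the paper’s formalism; rules are modelled crudely as premise–conclusion pairs of atomic sentences, and v as a map pairing expressions with subsets of the rule set):

```python
from dataclasses import dataclass, field

# Toy encoding of an atomic system <L, R, v>.

@dataclass(frozen=True)
class Rule:
    premises: tuple      # atomic sentences above the line
    conclusions: tuple   # atomic sentences below the line

@dataclass
class AtomicSystem:
    language: frozenset              # atomic sentences of L
    rules: frozenset                 # the rule set R
    assignment: dict = field(default_factory=dict)  # models v

    def assign(self, expression, rules):
        # v pairs expressions of L with subsets of R
        assert expression in self.language
        assert rules <= self.rules
        self.assignment[expression] = frozenset(rules)
```

Restrictions on v, such as the later requirement that only rules of the general form of Section 4.3 be assigned, would then amount to extra checks in `assign`.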
As it stands, there are no constraints on the rules assigned by v. In PTS inference rules are often restricted so that they stand in the right kind of relation to one another. A common restriction on the inference rules assigned to logical expressions is that they be harmonious. In a natural deduction setting, harmony is often understood in terms of not being able to infer anything more from the elimination of an expression than from the grounds for its introduction. A further requirement, sometimes called stability, is that we can infer no less from the elimination of a logically complex sentence than from the grounds for its introduction.
A connective like $Tonk$ [Reference Prior36] violates these requirements. It violates harmony because the conclusion of the elimination rule is not contained in the premise of the introduction rule (it gains information). $Tonk$ also violates stability because the main premise of the introduction rule is not included in the conclusion of the elimination rule (it loses information).
The take on these requirements adopted here is due to Frank Pfenning and Rowan Davies, which they call local soundness and completeness (LSC) [Reference Pfenning and Davies27], with soundness corresponding to harmony and completeness to stability.Footnote 15 In a natural deduction system, the elimination rules for a connective are sound relative to the introduction rules, when every derivation involving the application of the introduction and then the elimination rules can be reduced to one involving neither. E.g., conjunction
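For conjunction, the reduction witnessing local soundness is the familiar one (a standard sketch, with $\mathcal D_1$ and $\mathcal D_2$ derivations of the two conjuncts):

```latex
% An introduction of A ∧ B immediately followed by its elimination ...
\[
\frac{\dfrac{\mathcal{D}_1}{A} \qquad \dfrac{\mathcal{D}_2}{B}}{A \wedge B}\ \wedge\mathrm{I}
\quad \text{then} \quad
\frac{A \wedge B}{A}\ \wedge\mathrm{E}_1
\quad \Longrightarrow \quad
\dfrac{\mathcal{D}_1}{A}
\]
% ... reduces to the derivation of the projected conjunct alone
% (symmetrically for ∧E2 and D2).
```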
The elimination rules are complete relative to the introduction rules when every derivation of a complex sentence can be expanded into one where first the elimination and then the introduction rules are applied. Conjunction again:
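The corresponding expansion witnessing local completeness (again a standard sketch):

```latex
% Any derivation D of A ∧ B expands into one that first eliminates
% and then re-introduces the conjunction.
\[
\dfrac{\mathcal{D}}{A \wedge B}
\quad \Longrightarrow \quad
\frac{\dfrac{\dfrac{\mathcal{D}}{A \wedge B}}{A}\ \wedge\mathrm{E}_1
      \qquad
      \dfrac{\dfrac{\mathcal{D}}{A \wedge B}}{B}\ \wedge\mathrm{E}_2}
     {A \wedge B}\ \wedge\mathrm{I}
\]
```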
As can be seen, requiring LSC rules out $Tonk$ , because it violates both local soundness (harmony) and local completeness (stability). LSC as an approach to harmony and stability has two advantages which should be emphasised. First, it fits well with bilateralism because it need not prioritise one kind of rule over the other. Although I gave priority to the introduction rules above, the elimination rules could just as easily have been taken as basic. Second, it fits well with inference rules for atomics and subsententials because, as will be seen in the next section, each of these rules is just as much an introduction as an elimination rule. This contrasts with some other theories (e.g., [Reference Francez and Dyckhoff10, Reference Hallnäs and Schroeder-Heister15, Reference Schroeder-Heister, Pearce and Wansing44]), which prioritise one of the introduction (right-hand) or elimination (left-hand) rules and assign a particular form to the other.
In [Reference Pfenning and Davies27], LSC relates to natural deduction rather than sequent calculus. In the sequent calculus rules for logical connectives, vocabulary is only ever introduced rather than eliminated. However, as will be seen in the next section, with sequent calculus rules for material relations between atomic sentences the situation is like natural deduction where one expression is eliminated and another introduced. Thus LSC is apt for the sequent calculus as well. In the next section LSC is applied to general rules for material inference.
4.3 Concept clusters and general rule forms
A general framework for material inference rules in the classical multiple conclusion sequent calculus is now introduced, showing rules that are instances of this general form to be locally sound and complete. First, several examples of material inference rules are introduced. Then the material inference rules are represented diagrammatically along with the general form of the rules within the sequent calculus. Lastly, these general rules are shown to be locally sound and complete. Atomic systems restricted in such a way that the assignment function v only assigns rules of this form to expressions are as a result guaranteed to be locally sound and complete also.
4.3.1 Examples
Before moving on to material inference rules in general, I briefly describe three examples. First, the example of “conjunctive” relations between the atomics B, S, F, and Y,Footnote 16 as represented in Figure 7.
In Figure 7 we have an R-rule for B and a diagrammatic representation to its left.Footnote 17 These diagrams will become useful in the next section to represent larger languages with many inferential relations. Second, in Figure 8 are “disjunctive” and “negation”-like relations between the atomics O, E, and N.
In the first two examples, although strictly speaking involving inferences between atomic sentences, in their English translations it is the predicates which appear to be doing the work. In Figure 9 is an example where in the English it is the names doing so.
As can be seen, in each of these examples inference rules both introduce and eliminate expressions at the same time. This is a good reason to opt for an approach to harmony like LSC which does not prioritise one kind of rule over the other.
4.3.2 Concept Clusters
As will be seen in Section 4.3.3, even a small collection of expressions will require a relatively large number of rules to characterise their inferential relations. To make these inferential relations easier to visualise we can represent them diagrammatically, as in Figure 10 (though the inference rules should still be seen as conceptually prior).
These diagrams represent what we will call a “concept cluster.”
Concept Cluster: A concept cluster is a pair $\langle \mathbb {P}, \mathbb {C} \rangle $ of parents $\mathbb {P}$ and children $\mathbb {C}$ , where for some $m,n \geq 1$ , $\mathbb {P} = \{P_1,...,P_m\}$ and $\mathbb {C} = \{C_1,...,C_n\}$ , and at most one of $m,n > 1$ .
When $m = 1$ we call the cluster a single-parent cluster and will sometimes write $P_1$ simply as P; when $n = 1$ we call the cluster a single-child cluster, and sometimes write $C_1$ as C. In reference to the visualisation of concept clusters in Figures 7–10, when $m>1$ or $n>1$ we say that the cluster is branching, upwards or downwards respectively. When exactly one of $m,n = 1$ , we call the corresponding $P_1$ or $C_1$ the root of the cluster. Each member of $\mathbb {P} \cup \mathbb {C}$ is an atomic sentence (in Section 5 clusters made up of either predicates or names will be considered). Intuitively, a concept cluster represents an expression along with those that it is immediately semantically linked with, so as to form exhaustive and non-overlapping partitions, such as those in Figures 7–9. The rules for a cluster are represented by the lines in the diagrams: vertical lines represent inferential relations between parents and children, whereas the horizontal dotted lines represent (incompatibility) inferential relations between parents. The examples in Figures 7–9 have been concept clusters unconnected to each other, but in more complex languages, one expression may be part of many clusters, and hence feature in many sets of rules (see Figure 11).
In the examples in Figures 7–9 there were rules representing three kinds of relations, those of sufficiency, necessity, and incompatibility. Within a concept cluster different roles can be identified according to these relations. Each parent is (by itself) a sufficient condition for each of its children. Conversely, each child is (by itself) a necessary condition for each of its parents. Necessity goes up, while sufficiency goes down the diagrams. Parents are incompatible with each other, whereas children are compatible, in the sense that the rules for children do not entail their incompatibility.
Figure 11 is a collection of several concept clusters linked together. Each cluster whose root is a child is marked by different coloured branches. Note that because D is a parent of both A and B, it is also the root of the cluster $\langle \{D\},\{A,B\}\rangle $ .
4.3.3 Rules
Although the above diagrams make inferential relations easier to visualise, strictly speaking what’s conceptually prior are the inference rules. The diagrams represent collections of rules that have a particular, and desirable, shape. What’s needed is a general form of rules that correspond to these. A restriction can then be placed on atomic systems requiring all rules to conform to this general form. The general form of rules being considered is given in Figure 12.
Each parent and child has rules introducing [eliminating] it on both the left and the right.
-
• $C\mathrm {R}$ represents parents’ sufficiency for their children. The incoherence of a position that denies a parent is transferred to one denying a child.
-
• $P\mathrm {L_1}$ represents the other side of the same relation as in $C\mathrm {R}$ , namely, children’s necessity for their parents. The incoherence of a position asserting all the children is transferred to one that asserts a parent. Note that with weakening, incoherence of asserting all children together follows from that of asserting one.
-
• $C\mathrm {L}$ represents the way in which at least one parent is required for any children. Incoherence of asserting each parent is transferred to that of asserting the children.
-
• $P\mathrm {L_2}$ represents incompatibility between parents.
-
• $P\mathrm {R}$ is the reverse of the previous two. If some parent is required for any child, then commitment to each of the children, and rejection of all but one of the parents requires commitment to the remaining parent.
The rules have been phrased in a way which is neutral regarding the number of parents and children. Any actual cluster, however, will be either single-parent ( $m=1$ ) or single-child ( $n=1$ ).
The rules can be illuminated better by applying them to the examples from Section 4.3.1. Figures 13–15 are the resulting instances of “plugging in” the number of parents m and children n from each concept cluster in Section 4.3.1 to the general rules in Figure 12. As can be seen below, the general rules will produce particular rules of different shapes given particular numbers of parents and children.
The $\langle \{O,E\}, \{N\}\rangle $ concept cluster from the example in Figure 8 is a cluster with two parents ( $m=2$ ) and one child ( $n=1$ ). In Figure 13 for the cluster’s rules, the general rules have been instantiated as follows: $C\mathrm {R}$ has become the two $N\mathrm {R}$ rules. $P\mathrm {L_1}$ has become $O\mathrm {L_1}$ and $E\mathrm {L_1}$ . $C\mathrm {L}$ has become $N\mathrm {L}$ . The general form of the rule allows for multiple children to be introduced on the left. However, for those like $\langle \{O,E\}, \{N\}\rangle $ which are single-child clusters, the instance of this rule will only introduce one child on the left. $P\mathrm {L_2}$ has become $O\mathrm {L_2}$ and $E\mathrm {L_2}$ . Lastly, $P\mathrm {R}$ has become $O\mathrm {R}$ and $E\mathrm {R}$ . As there are only two parents and one child, the premises for these instances of the rule are much simpler than the general rule itself.
In Figure 14 are the instances of the general rules for the $\langle \{B\}, \{S,F, Y\}\rangle $ cluster (Figure 10). This is a single-parent cluster ( $m=1$ ) with three children ( $n=3$ ). The rules in the cluster are similar to those for $\langle \{O,E\}, \{N\}\rangle $ but with some differences due to it being a single-parent, multi-child cluster (rather than multi-parent, single-child). The instance of $C\mathrm {L}$ , the rule $S,F, Y\mathrm {L}$ , therefore only has one premise sequent and introduces multiple formulas in the conclusion sequent. $B\mathrm {R}$ , the instance of $P\mathrm {R}$ , also differs in requiring derivations on the right of many children but none on the left of other parents. Lastly in Figure 15, are those for the earlier non-branching ‘Superman/Clark Kent’ cluster $\langle \{S\},\{C\}\rangle $ . This is both a single-parent ( $m=1$ ) and single-child cluster ( $n=1$ ). There is no rule corresponding to $P\mathrm {L_2}$ because there is only one parent. As will be important when discussing subsententials, clusters with branching result in asymmetric inference rules whereas the rules for those without branching are symmetric.
4.3.4 General local soundness and completeness
I now show that the above general rule forms are locally sound and complete (LSC). LSC is shown for each cluster rather than each expression. The general rule forms are divided into two groups, as shown in Figures 16 and 17. The first group are right-hand rules for children and left-hand rules for parents. The second group are the converse of the first, being left-hand rules for children and right-hand rules for parents. The second group will now be shown to be locally sound and complete relative to the first.
In terms of the visualisation of the rules in diagrams, think of the first rules as those showing movement from parents down to children or across to other parents. Parents are sufficient conditions for children (‘odd’ to ‘number’) and incompatible with one another (‘odd’ and ‘even’). Think of the second rules as those showing, in the diagrams, movement from children up to parents (from ‘single’, ‘female’ and ‘young’ to ‘bachelorette’).
Local Soundness: As can be seen in Figures 18 and 19, the second rules are sound relative to the first as one can apply the second rules to the outputs of the first rules, resulting only in the inputs for the first rules. No information is gained through the application of the second rules which is not already “contained” in the application of the first. Derivations which apply the first and then the second rules of the one cluster can be reduced to ones that do not.
Local completeness: As can be seen in Figures 20 and 21, the second rules are complete relative to the first as one can apply the first rules to the outputs of the second rules, resulting in the inputs for the second rules. No information “contained” in the application of the first rules is lost through the application of the second. Derivations of children on the right and parents on the left can be expanded into ones which eliminate, using the second rules, and then introduce, using the first rules, the same expressions on the right and left respectively.
Note that on these definitions it is irrelevant which rules are chosen as “first” and “second.” The first rules could have been shown to be locally sound and complete relative to the second. This feature fits well with material rules because each both introduces and eliminates vocabulary.
What the above shows is that the general form of rules is locally sound and complete. This leads to the first restriction on the assignment function v of the atomic systems being considered. We restrict v so as to only assign rules of the general form given in Figure 12, therefore guaranteeing that any such system will be LSC as well.
One might be concerned that the two sets of rules are gerrymandered into those that give LSC rather than being a principled division.Footnote 18 LSC is normally used in a natural deduction setting where rules are divided into introduction and elimination rules, which are then shown to be harmonious and stable relative to each other. In a standard sequent calculus, all rules are introduction rules, which introduce vocabulary on either the left or the right of the sequent. In these systems, elimination of principal cut formulas and derivation of identity sequents for arbitrary formulas often correspond to harmony and stability.Footnote 19 Neither of these exact divisions is available in the case of material inferences. The natural deduction style division of the rules into introduction and elimination rules doesn’t work because each rule simultaneously introduces and eliminates vocabulary. The standard sequent style division into left and right introduction rules isn’t exactly fitting either, for the same reason: each rule is a left or right introduction rule, but it is also a left or right elimination rule. The division into first and second rules does however have a natural structure similar to the standard divisions. The first rules are left-hand rules for parents and right-hand rules for children, which would correspond to natural deduction rules eliminating parents and introducing children. The second rules are right rules for parents and left rules for children, which would correspond to natural deduction rules introducing parents and eliminating children. This sort of division fits naturally with “impure” rules that simultaneously eliminate and introduce expressions.
What has been shown so far is that the system of rules for each of the concept clusters is locally sound and complete. What hasn’t been shown is that a whole language built up from many clusters is globally so. The global correlate of local soundness is the admissibility of the structural rule of $\mathrm {Cut}$ in a system without it. Demonstrating that $\mathrm {Cut}$ is admissible requires showing that for any derivation using the $\mathrm {Cut}$ rule, there is one of the same end-sequent which does not use $\mathrm {Cut}$ .
$\mathrm {Cut}$ ’s inadmissibility would have two undesirable features. Read top-to-bottom, it would allow for expressions which “gain information” in the sense of allowing more information to be extracted than is put in. E.g., A may be a sufficient condition for B, and B a sufficient condition for C, but A not a sufficient condition for C. Read bottom-to-top, it would allow for situations where a position $\Gamma :\Delta $ is coherent, but for some sentence A, neither the extension of $\Gamma :\Delta $ with the assertion of A nor with the denial of A is coherent. Luckily, $\mathrm {Cut}$ is admissible in the above atomic system, meaning that it is globally sound (see [Reference Tanter55, Appendix B] for the proof).
The global equivalent of completeness is an identity proof for arbitrary sequents of the form $A\vdash A$ . $\mathrm {Id}$ reads as saying for any atomic sentence, it is incoherent to both assert and deny it at the same time:
Normally the identity axiom applies only to atomic sentences, and identity sequents for expressions of arbitrary logical complexity are shown to follow from $\mathrm {Id}$ and the connectives’ rules. A failure of general identity proofs allows for expressions which lose information, in contrast to failures of $\mathrm {Cut}$ gaining information.Footnote 20 More worryingly, in terms of assertion and denial, it allows for the coherent assertion and denial of the same sentence. The languages under discussion are atomic. What corresponds to general identity proofs in these languages are ones showing that given the identity axiom for the parents of a cluster, we can derive it for the children and vice versa (see [Reference Tanter55, Appendix A] for the proof).
Three objections might be raised to this take on LSC. The first questions why LSC should hold for clusters rather than individual expressions; the second, why LSC should hold at all; and the third, why LSC rather than another method is used. The first objection says that given LSC holds for individual logical expressions then it should hold for individual non-logical expressions. This misses an important difference between material inferential relations and those involving traditional logical constants. The rules for logical constants are given in terms of arbitrary expressions of a particular type, and the rules do only one of introducing or eliminating an expression. Because of this the inference rules for constants aren’t dependent on those for any other particular expression. In contrast, material rules relate particular expressions to one another, introducing one and eliminating the other. They are inherently relational, and so LSC cannot be characterised as a property of a single expression but rather a concept cluster.
The second objection says that LSC is required for logical expressions but not material, non-logical ones. Brandom, for example, claims that inasmuch as there is a notion of harmony for non-logical expressions, it differs from that of logical expressions. One might think that this undermines the case for applying LSC to material concepts. The distinction, however, that Brandom makes between logical and material expressions is that the addition of the former but not the latter must yield conservative extensions of the language, in order for logic to play an explicative role [Reference Brandom2, p. 68]. Brandom claims that the addition of material concepts may be non-conservative and this ‘non-conservativeness just shows that it has substantive content’ [Reference Brandom2, p. 71]. However, nothing said so far requires the extension of a language with a new material expression to always be conservative. It may sometimes be the case but is not required. This fits with Brandom’s claim that ‘[g]rooming our concepts and material inferential commitments… is a messy, retail, business’ [Reference Brandom2, p. 75].
The third objection claims that LSC is unnatural in a sequent calculus setting. The objection goes that LSC is fitting for natural deduction systems, but in sequent calculus the natural route is to demonstrate harmony by deriving one set of rules from the other using $\mathrm {Cut}$ (e.g., [Reference Došen5, Reference Restall40, Reference Sambin, Battilotti and Faggian42, Reference Schroeder-Heister45]).Footnote 21 However, LSC rather than $\mathrm {Cut}$ is appropriate in this context for two reasons. First, the reason that $\mathrm {Cut}$ rather than LSC is normally used for sequent calculus is that in a standard sequent calculus, $\mathrm {Cut}$ is the only rule that eliminates vocabulary.Footnote 22 In the atomic systems under discussion, however, all rules both eliminate and introduce vocabulary, meaning that LSC is applicable. Second, instead of assuming $\mathrm {Cut}$ as basic, the aim is to use the local property of LSC to show that, when it holds, $\mathrm {Cut}$ is admissible globally. Assuming $\mathrm {Cut}$ from the start would get things the wrong way round. In fact, a consequence of the rules being locally sound and complete is that as long as the structural rules of weakening and contraction are present, cuts on principal constituents can be eliminated (see [Reference Tanter55, Appendix B]), in a way similar to the relation between $\mathrm {Cut}$ and Schroeder-Heister and Hallnäs’ definitional reflection.
5 Subsententials
5.1 Introduction
Moving beyond logical vocabulary and sentential semantics to that of subsententials presents an apparent challenge for inferentialism. It may not be obvious how to extend the inferentialist philosophical thesis about meaning to subsentential expressions such as names and predicates. Unlike sentences, these expressions do not stand directly in inferential relations. So they must, in some way, contribute to the inferential role of sentences [Reference Brandom1, p. 363–4], analogous to a truth-conditional semantics, where the constituents’ semantic contributions determine the whole sentence’s truth-conditions without themselves necessarily having truth-conditions. First, I take the previous inference rules for atomics and apply them to names and predicates, showing that they can accommodate Brandom’s inferentialist distinction between the two in terms of their inferential roles. Brandom’s thesis is that names and predicates are distinguished by the former only standing in symmetric inferential relations [Reference Brandom1, Chapter 6, Reference Brandom2, Chapter 4].Footnote 23 I then show that the general rules can accommodate these relations in a compositional semantics.
5.2 Model theory
I now set up an analogy with more standard extensional model-theoretic semantics and then show how the analogous structure can be represented proof-theoretically in subatomic systems.
Model-theoretic semantics has a standard story about predicates and names. The latter are assigned single objects and the former sets of n-tuples of objects, interpreted as referring to individuals and properties respectively. Inferentialists need to draw this distinction in terms of inferential relations rather than kinds of reference, with Brandom arguing that names are distinguished from predicates by only standing in symmetric inferential relations. Before showing how our inference rules can accommodate this, we show that it agrees with model-theoretic semantics on the underlying structure.
To set up the analogy with model theory, take a simple language $\mathcal {L}$ with only the syntactic categories of names t, n-place predicates $P_n$ , and sentences S of the form $P_{n}t_{1},...,t_{n}$ . A model $\mathcal {M}$ is a triple of $\mathcal {L}$ , a domain D, and an assignment function v which assigns objects from D to expressions in $\mathcal {L}$ . v assigns to sentences one and only one of the truth values true or false, to names individual objects, and to n-place predicates sets of n-tuples of objects from the domain. The assignments to expressions are their semantic values. v is restricted such that the value of a sentence $P_{n}t_{1},...,t_{n}$ is true iff the n-tuple of the values of the names within the sentence, $t_{1},...,t_{n}$ , is a member of the value of the predicate $P_{n}$ . Put less formally, names pick out individual objects, predicates sets of (n-tuples of) objects which satisfy them, and sentences are true iff the object(s) picked out by the names in the sentence satisfy the predicate in the sentence. This semantics is compositional in that the meanings (values) of sentences are a function of the meanings of their constituents. Importantly, although truth plays a central role, subsentential expressions don’t have truth values; rather, they contribute to the truth-values of sentences.
Define model-theoretic consequence such that a sentence A of $\mathcal {L}$ is a consequence of some (possibly empty) set of sentences $\Gamma $ of $\mathcal {L}$ iff there is no model in which all of $\Gamma $ are true and A is false. Truth is preserved from premises to conclusion. Using this, define “substitution-consequence” relations for subsententials like so: a name [predicate] a [F] is a substitution-consequence of another, b [G], iff for each sentence S containing a [F], the substitutional variant $S/\frac {a}{b}$ [ $S/\frac {F}{G}$ ] obtained by replacing some number of occurrences of a [F] by b [G] entails S.Footnote 24 A subsentential entails another of the same category so long as truth is preserved under substitution. These substitution-consequence relations between names will always be symmetric because the values of names are single objects – the value of one is a member of that of some predicate iff the other is as well. The relations between predicates, however, can be asymmetric because their values are sets of (n-tuples of) objects. The value of one might be a proper subset of another in all valuations, allowing truth-preservation one way but not the other.Footnote 25 This structure of symmetry for names and asymmetry for predicates is shared by both model-theoretic representationalist semantics and Brandom’s inferentialism. They can agree on the general structure while still disagreeing about the relative priority of representation and inference.
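The structural point can be seen in a single toy model: symmetry for co-referential names falls out of their values being single objects, while a proper-subset relation between predicate extensions produces asymmetry. This is a hypothetical sketch with made-up assignments, and it checks truth-preservation in this one model only rather than over all models:

```python
# Hypothetical one-model sketch: names denote single objects, one-place
# predicates denote sets of objects.
names = {"clark": 1, "superman": 1, "lois": 2}
preds = {"platypus": {1}, "monotreme": {1, 2}}

def pred_entails(F, G):
    """Does Ft entail Gt for every name t, in this model? Subset of extensions."""
    return preds[F] <= preds[G]

def name_entails(a, b):
    """Does Pa entail Pb for every predicate P, in this model?"""
    return all(names[b] in preds[P] for P in preds if names[a] in preds[P])

# Asymmetry for predicates: platypus's extension is a proper subset.
assert pred_entails("platypus", "monotreme")
assert not pred_entails("monotreme", "platypus")

# Symmetry for co-referential names: same object, same predicate memberships.
assert name_entails("clark", "superman") and name_entails("superman", "clark")
```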
5.3 Inference rules
Here I show how the general inference rules from the previous section apply to subsententials and that asymmetry and symmetry correspond to branching and non-branching concept clusters respectively.
Inference rules for subsententials are formulated as in $1$ and $2$ :
In rules such as $1$ for names, the names, here a and b, stand in an arbitrary predicate context, represented by $\Phi $ . In those for predicates, such as $2$ , the n-place predicates, G and F, stand with an n-tuple of arbitrary names $\alpha _1...\alpha _n$ . $\Phi $ and $\alpha _1...\alpha _n$ respectively play an analogous role to the arbitrary As and Bs in rules for connectives.
Brandom’s thesis that names and predicates are distinguished by standing in symmetric only and asymmetric inferential relations respectively can be captured by subatomic systems. Before showing this, I show that our rules in general (reproduced in Figure 22) can accommodate symmetric and asymmetric relations, corresponding to non-branching and branching concept clusters.
Instances of these general rules will be symmetric or asymmetric depending on whether their concept cluster branches, i.e., when $m>1$ or $n>1$ . If a cluster branches, the inferential relations between parents and children will be asymmetric. With upwards or downwards branching, $C\mathrm {R}$ and $P\mathrm {L_1}$ can both be used to derive $P_i\vdash C_j$ – they are different perspectives on the same inferential relation. However $C_j\nvdash P_i$ , because either only $C_1,...,C_n\vdash P_1$ with downwards branching, or only $C_1\vdash P_1,...,P_m$ with upwards branching, due to $C\mathrm {L}$ and $P\mathrm {R}$ .Footnote 26 Branching clusters capture asymmetric inferential relations. With non-branching clusters, each of $C\mathrm {R}$ , $P\mathrm {L_1}$ , $C\mathrm {L}$ , and $P\mathrm {R}$ is a single-premise single-conclusion rule, allowing for $P_i\vdash C_j$ from the first two and $C_j\vdash P_i$ from the latter two. So the general rules can also capture the structure of symmetric inferential relations when applied to concept clusters without branching, i.e., those that are both single-parent ( $m=1$ ) and single-child ( $n=1$ ). The rules go beyond Brandom’s theory by also representing relations of incompatibility. This is in part a result of bilateralism and stands in contrast to Brandom’s unilateralism. Bilateralists get incompatibility on the cheap by taking both assertion and denial, and incompatibilities between them, as basic. In contrast, Brandom needs to define incompatibilities between sentences in terms of the commitment to one sentence disentitling any speaker to the other [Reference Brandom1, p. 160, Reference Brandom2, p. 43].
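The pattern just described can be sketched as follows, with hypothetical function names: branching clusters yield only parent-to-child entailments, while non-branching clusters yield both directions.

```python
def entailments(parents, children):
    """Single-formula entailments licensed by a cluster <parents, children>.
    Illustrative sketch: CR/PL1 always give P_i |- C_j; CL/PR add the
    converse C_j |- P_i only when the cluster is non-branching (m = n = 1)."""
    branching = len(parents) > 1 or len(children) > 1
    rels = {(p, c) for p in parents for c in children}       # parent |- child
    if not branching:
        rels |= {(c, p) for p in parents for c in children}  # child |- parent
    return rels

# Branching cluster <{O, E}, {N}>: asymmetric (no N |- O or N |- E).
assert entailments(("O", "E"), ("N",)) == {("O", "N"), ("E", "N")}
# Non-branching <{S}, {C}>: symmetric, as with 'Superman'/'Clark Kent'.
assert entailments(("S",), ("C",)) == {("S", "C"), ("C", "S")}
```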
The story about symmetry and asymmetry so far has nothing to say about examples of names which do not stand in any substitution relations (i.e., are not intersubstitutable with any other names). Arguably they still play an inferential role, just as names which are not co-referential with any others still play a role in model-theoretic semantics. To capture this, take the identity axiom to apply not to atomic sentences but instead to names in arbitrary predicate contexts:
Substitute a name in the language, a, for $\alpha $ to get an instance of the rule for a, and then substitute a predicate F for $\Phi $ to get an instance of the rule for the atomic sentences $Fa$ . The identity axiom captures the simple sense of asserting and denying the same thing being incoherent. In propositional logic there is no simpler sense of “the same thing” than an atomic sentence, and in predicate logic the particular names and predicates are normally irrelevant. However, because subatomic systems are concerned with particular names, and particular predicates, they treat the propositional sense of asserting and denying “the same thing” as derived from that of asserting and denying the same predicate of the same name. This matches the model-theoretic notion of the value of a name never being both in and not in the extension of the same predicate.
Extending the earlier notion of an atomic system, we define a subatomic system as follows.
-
Subatomic system: A subatomic system is a triple $\langle \mathcal {L}, \mathcal {R}, v\rangle $ of a language $\mathcal {L}$ made up of names, n-place predicates, and sentences of the form $P_nt_1,...,t_n$ , a set of inference rules $\mathcal {R}$ , and an assignment function v from subsets of $\mathcal {R}$ to expressions in $\mathcal {L}$ .Footnote 27
The following restrictions are set on v:
-
LSC The rules for each concept cluster are instances of the general rule forms;
-
Symmetry For names, each concept cluster has only one parent and one child;
-
Identity Each name in the language is assigned an instance of the identity axiom $\mathrm {Id_s}$ ; and
-
Compositionality For sentences, rules are assigned by substituting the predicate of the sentence into the rules for its names and its names into the rules for its predicate.
The first restriction simply carries over from the previous atomic systems, ensuring that our subatomic systems are locally sound and complete. Symmetry ensures that inferential relations for names are symmetric while allowing those for predicates to be asymmetric. Identity, as discussed above, gives a subsentential notion of asserting and denying the same thing being incoherent. Compositionality ensures that the rules assigned to sentences are a function of those of their constituents and the way they combine. For example, suppose you have the sentence $Fa$ and the following rules for F and a.
By substituting F for the metalinguistic variable $\Phi $ in the a rules and a for the metalinguistic variable $\alpha $ in the F rule you get the following rules for $Fa$ .
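The substitution step can be sketched as string rewriting over rule schemas. This is an illustrative encoding of my own; the two name rules and one predicate rule below simply stand in for whichever rules v assigns to a and F:

```python
# Hypothetical rule schemas: 'Phi' is the arbitrary predicate context in the
# rules for the name a, and 'alpha' the arbitrary name in the rules for F.
name_rules_for_a = ["Phi(b) |- Phi(a)", "Phi(a) |- Phi(b)"]  # symmetric: a/b
pred_rules_for_F = ["F(alpha) |- G(alpha)"]                  # F sufficient for G

def rules_for_atomic(pred, name):
    """Rules for the atomic sentence pred(name), by Compositionality:
    substitute the predicate for Phi and the name for alpha."""
    by_name = [r.replace("Phi", pred) for r in name_rules_for_a]
    by_pred = [r.replace("alpha", name) for r in pred_rules_for_F]
    return by_name + by_pred

assert rules_for_atomic("F", "a") == [
    "F(b) |- F(a)", "F(a) |- F(b)", "F(a) |- G(a)"]
```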
Concept clusters for languages with subsentential structure can also be represented diagrammatically, and these diagrams can be composed in a similar fashion. Suppose there are the predicate concept cluster $\langle \{D,E\},\{C\}\rangle $ and the name concept cluster $\langle \{b\},\{a\}\rangle $ in Figure 23.
Each of these clusters represents by itself the inferential relations between a cluster of names and predicates respectively. These clusters can be combined to represent the inferential relations between atomics as determined by those for their constituent names and predicates. This is done by making two copies of the predicate cluster diagram (one for each name) and three copies of the name cluster diagram (one for each predicate). Then fuse each of the b nodes from the name cluster diagrams to one (and only one) of each of the predicates on one copy of the predicate cluster diagram. Lastly, then fuse each of the a nodes to the corresponding predicate on the other copy of the predicate cluster diagram. The resulting diagram in Figure 24 represents the inferential relations between six atomics as determined by their constituent names and predicates.
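The fusion procedure can be sketched combinatorially (a hypothetical encoding, writing atomic sentences as predicate-name strings): one copy of the predicate cluster per name, one copy of the name cluster per predicate.

```python
# The predicate cluster <{D, E}, {C}> and name cluster <{b}, {a}> of Figure 23,
# each as a (parents, children) pair.
pred_cluster = (("D", "E"), ("C",))
name_cluster = (("b",), ("a",))

def atomic_clusters(pred_cluster, name_cluster):
    """Clusters over atomic sentences determined by subsentential clusters."""
    all_preds = pred_cluster[0] + pred_cluster[1]
    all_names = name_cluster[0] + name_cluster[1]
    clusters = []
    # One copy of the predicate cluster per name, e.g. <{Da, Ea}, {Ca}>.
    for t in all_names:
        clusters.append((tuple(P + t for P in pred_cluster[0]),
                         tuple(P + t for P in pred_cluster[1])))
    # One copy of the name cluster per predicate, e.g. <{Db}, {Da}>.
    for P in all_preds:
        clusters.append((tuple(P + t for t in name_cluster[0]),
                         tuple(P + t for t in name_cluster[1])))
    return clusters

clusters = atomic_clusters(pred_cluster, name_cluster)
assert len(clusters) == 5  # two predicate-cluster copies, three name-cluster copies
assert len({x for (ps, cs) in clusters for x in ps + cs}) == 6  # six atomics
```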
In this way, concept clusters and larger collections of concept clusters for atomics can be built out of those for names and predicates. These diagrammatic representations are compositional in the sense that they are a function of the diagrams for their constituents and the form of combination set out above.
6 Concluding remarks
This paper has drawn on Greg Restall’s bilateralist interpretation of the classical multiple conclusion sequent calculus and Robert Brandom’s notion of material inference to provide a proof-theoretic semantics for atomic sentences and their component names and predicates. The central notion is that of a subatomic system which assigns (material) inference rules to atomic and subatomic expressions. Various restrictions on the assignment of rules to expressions ensure that the resulting systems behave as desired. These restrictions were: (i) LSC, that all rules are instances of the general rule forms. This ensures that subatomic systems are locally sound and complete; (ii) Symmetry, that concept clusters for names are non-branching; (iii) Identity, that names are assigned instances of a subatomic identity axiom. These two, along with the first restriction, provide an analogous structure within the proof-theory to more familiar model-theoretic semantics; and (iv) Compositionality, that the assignment of inference rules to atomic sentences is a function of the rules assigned to the sentence’s constituents and how they combine. This ensures that the semantics is compositional, despite having holistic features. Future research may expand on this semantics in several ways. Some possibilities include considering: non-classical, particularly substructural systems; further kinds of non-logical vocabulary, particularly those within natural languages; and further generalisation of the rule schemas.
Acknowledgements
I would like to thank David Ripley, Lloyd Humberstone, Peter Schroeder-Heister and an anonymous referee for their comments on this paper, as well as Greg Restall and Shawn Standefer for their comments on earlier versions of this work. Helpful feedback was also provided by audiences at the Third Tübingen Conference on Proof-Theoretic Semantics, the 15th Asian Logic Conference, the Australasian Association of Logic Conference, Kyoto University and Hokkaido University. This work was funded by an Australian Government Research Training Program Scholarship.