
Lessons from the English auxiliary system

Published online by Cambridge University Press:  03 January 2019

IVAN A. SAG, Stanford University
RUI P. CHAVES, University at Buffalo, SUNY
ANNE ABEILLÉ, Université Paris Diderot–Paris 7
BRUNO ESTIGARRIBIA, University of North Carolina
DAN FLICKINGER, Stanford University
PAUL KAY, University of California, Berkeley
LAURA A. MICHAELIS, University of Colorado Boulder
STEFAN MÜLLER, Humboldt-Universität zu Berlin
GEOFFREY K. PULLUM, University of Edinburgh
FRANK VAN EYNDE, University of Leuven
THOMAS WASOW, Stanford University

Abstract

The English auxiliary system exhibits many lexical exceptions and subregularities, and considerable dialectal variation, all of which are frequently omitted from generative analyses and discussions. This paper presents a detailed, movement-free account of the English Auxiliary System within Sign-Based Construction Grammar (Sag 2010, Michaelis 2011, Boas & Sag 2012) that utilizes techniques of lexicalist and construction-based analysis. The resulting conception of linguistic knowledge involves constraints that license hierarchical structures directly (as in context-free grammar), rather than by appeal to mappings over such structures. This allows English auxiliaries to be modeled as a class of verbs whose behavior is governed by general and class-specific constraints. Central to this account is a novel use of the feature aux, which is set both constructionally and lexically, allowing for a complex interplay between various grammatical constraints that captures a wide range of exceptional patterns, most notably the vexing distribution of unstressed do, and the fact that Ellipsis can interact with other aspects of the analysis to produce the feeding and blocking relations that are needed to generate the complex facts of EAS. The present approach, superior both descriptively and theoretically to existing transformational approaches, also serves to undermine views of the biology of language and acquisition such as Berwick et al. (2011), which are centered on mappings that manipulate hierarchical phrase structures in a structure-dependent fashion.

Research Article
Copyright © Cambridge University Press 2019

1 Introduction

English makes an important distinction between auxiliary and non-auxiliary phenomena. The data in (1) illustrate what have been referred to (see Quirk et al. 1985, Warner 1993) as the nice properties, or as nicer properties, once we factor in the ability of auxiliaries to be stressed or to combine with the particles too, so or indeed to perform a ‘rebuttal’ function (see, e.g. Jackendoff 1972 and Pullum & Zwicky 1997).[1]

The EAS has, of course, many other distinctive properties, and one that has remained unaccounted for in any previous analysis we are aware of – including Hudson (1976a), Gazdar et al. (1982), Starosta (1985), Lasnik (1995), Kim & Sag (2002), Lasnik, Depiante & Stepanov (2000), and Freidin (2004) – is that auxiliary do is ‘necessary whenever it is possible’ (Grimshaw 1997). Thus, in a nicer construction, a finite non-auxiliary head is replaced by the corresponding form of auxiliary do and the base form of the head:

The EAS is also special in that it has played a pivotal role in shaping linguistic theory. Chomsky’s (1957) analysis of EAS in Syntactic Structures, a variant of earlier treatments in Chomsky (1955, 1956), argued that EAS must be analyzed in terms of movement operations like those whose effect is sketched in (3).[2]

The existence of discontinuous dependencies such as these (e.g. between have and en) was taken to motivate a framework based on transformational operations (movement rules), an assumption that has persisted until now (Berwick & Chomsky 2008, Berwick et al. 2011), even though it has been quite widely challenged.[3] Crucially, the movement-based analysis of EAS also permeated discussions of broader issues, such as debates about learnability. For example, in Piattelli-Palmarini (1980: 40) Chomsky argues that although children make many errors during language learning, they do not produce errors such as *Is the man who here is tall?, where the auxiliary in the relative clause is fronted rather than the auxiliary in the main verb phrase, i.e. Is the man who is here tall? Experimental results in this domain, like those of Crain & Nakayama (1987), suggested that subjects interpret sentences like (3a) as (3b), not (3c).

Ambridge et al. (2008) note several design flaws in Crain’s experiments, and describe two elicited production studies which revealed that children occasionally do produce the supposedly non-existent type of sentence. In forming polar interrogatives from sentences with two instances of can, around 20% of children’s responses involved either doubling the auxiliary (Can the boys who can run fast can jump high?) or exactly the type of error that Chomsky claimed never occurs, e.g. Can the boy who run fast can jump high? The results were similar with sentences involving two occurrences of is (Is the boy who washing the elephant is tired?). Ambridge et al.’s conclusion is that the ‘data do not provide any support for the claim that structure dependence is an innate constraint, and that it is possible that children form a structure-dependent grammar on the basis of exposure to input that exhibits this property.’

Moreover, as Bob Borsley (p.c.) notes, the example in (4a) is particularly unhelpful because modals like can show no agreement and because the non-finite form of a regular verb is identical to the non-third person singular present tense form. If all examples were as unhelpful as this, there might really be a Poverty of the Stimulus issue, but they are not. Berwick and Chomsky’s example becomes more helpful if eagles is replaced by an eagle, i.e. Can an eagle that flies eat? Here it is clear that can is associated with eat and not flies, because the latter can only be a third person singular present tense form. In fact, all the child has to learn is that a clause-initial auxiliary is followed by its subject and its complement, and there is plenty of evidence for this, especially when the auxiliary agrees with the subject, as in Has/*Have Kim gone home?, Have/*Has the boys gone home?, Is/*Are Kim going home?, etc. Whatever grammatical approach is assumed, a learner can note from simple examples that polar interrogatives involve an auxiliary followed by its subject and its complement; once she knows this, more complex examples pose no problem, whether or not she has encountered auxiliary-initial clauses whose subjects contain a relative clause. Indeed, a range of computational modeling results suggest that SAI can be learned from the data alone, such as Lewis & Elman (2001), Reali & Christiansen (2005), Clark & Eyraud (2006), and Bod (2009).
For more discussion see also Estigarribia (2007: 14–16), Pullum & Scholz (2002), Scholz & Pullum (2006) and Clark & Lappin (2011: Ch. 2). In response to such recent positive learnability results for various classes of CFGs (Context-Free Grammars), Berwick & Chomsky (2008: 383) write that ‘such work does not even address the AFP [auxiliary fronting problem] as originally posed, since the original formulation employs the notion “front”, i.e., “move”, not a part of the CFGs or alternatives used in these recent challenges.’ This is the weak link in the argument. Even if we all agree that grammars for human languages have no transformations whose structural analysis requires counting the number of auxiliaries, we need not ipso facto believe that such grammars have structure-sensitive transformations. Several traditions of non-transformational grammar are well established in the field, including Categorial Grammar (Steedman 1996, 2000), Head-Driven Phrase Structure Grammar (Pollard & Sag 1994, Ginzburg & Sag 2000), and Lexical-Functional Grammar (Bresnan 2001), yet not one of these is even mentioned in recent discussions of EAS, even though the literature contains several comprehensive and reasonably precise analyses, published in highly accessible venues.

Chomsky claims that syntactic knowledge consists in large part of a transformational mapping from phrase structures to phrase structures (what is now referred to as ‘Internal Merge’). But if grammatical rules directly generate linguistic structures without any transformational mappings, then ‘what people know’ about EAS includes no fronting rules, and the entire issue of whether or not transformations make reference to phrase structure is rendered moot.

In this paper, we present a new, non-transformational analysis of EAS and show that it answers the various concerns that have been raised in the half century of extensive research on this topic. Moreover, we show that it handles idiosyncrasies never properly treated (to our knowledge) in transformational terms, arguing that this analysis is a plausible candidate for ‘what people know’ about EAS. As we will see, this analysis has no ‘structure-dependent operations’. As a corollary, Chomsky’s famous argument for the Poverty of the Stimulus based on EAS collapses. The structure of the paper is as follows. Section 2 provides a brief and non-technical exposition of the account. Section 3 presents the theoretical framework in detail, and lays out the basic foundations of the grammar. The remaining sections then focus on the EAS itself, dealing in particular with Inversion, Ellipsis, Negation, and Rebuttal, respectively.

2 Toward a constructional analysis

Despite the success of constraint-based, lexicalist frameworks (including Hudson (1976b), Gazdar et al. (1982), Warner (2000), and Kim & Sag (2002)) in analyzing the lexical idiosyncrasy that surrounds such matters as inversion, negation, and contraction, the fact remains that all such accounts have failed to provide a satisfactory account of the restricted distribution of unstressed do, i.e. the famous contrast between pairs like (5a, b):

It is sometimes thought[4] that such contrasts can be explained in pragmatic terms. For example, one might try to explain the deviance of (5a) as pragmatic preemption, given the availability of the synonymous (6):

Such an account would presumably appeal to Grice’s (1975) maxims of Quantity (‘Be brief’) and/or Manner (‘Avoid prolixity’). Falk (1984) instead assumes that (6) preempts (5a) by a grammatical principle equivalent to the principle of Economy of Expression discussed in Bresnan (2000). Freidin (2004) makes a similar proposal in terms of his principle of ‘Morphological Economy’. In all such accounts, (5b) is meant to avoid preemption because some further meaning is being conveyed that (6) cannot express.[5] According to Economy of Expression, syntactically more complex sentences are preempted by the availability of syntactically less complex sentences that are semantically (or functionally) equivalent. Morphology thus competes with, and systematically blocks, syntax.

The trouble with such explanations is that they explain too much. First of all, they seem to leave no room for dialects of the sort reported by Palmer (1965) (see Klemola (1998) and Schütze (2004) for further discussion), where examples like (5a) are apparently fully grammatical and completely synonymous with (6). In addition, the competition-based approach would lead us to expect, incorrectly, that examples like the following should also be preempted:

Similarly, optional ellipsis, as in the following sentences, should not be possible:

These are worrisome incorrect predictions that preemption-based theories are hard-pressed to avoid. One might try to correct this problem by resorting to distinctions of finer grain, but it is difficult to find independent motivation for a semantic or functional distinction between contracted and uncontracted forms, between embedded clauses with or without that, or between deaccented and elided expressions (for example).[6] In any case, an analysis along these lines would have to explain why (5a) is not semantically/functionally distinct from (6), since otherwise the preemption-based explanation for (5a) would be undermined. The contrast between (5a, b) appears to be the kind of problem that should be accounted for not by general principles of preemption, but rather by the particulars of the grammar of English.

In the remainder of this section we present a preliminary informal version of the analysis detailed later in the paper, along with some of the basic facts that motivate key elements of the analysis. Some aspects of this account may be familiar from earlier non-transformational analyses, but there are crucial innovations, which we will highlight. Central to the present account of the English auxiliary system is the distinction between auxiliary verbs and auxiliary constructions. The latter are just the NICER environments, and they all require verbal daughters that bear the feature aux +. Some of these environments are modeled with lexical constructions (i.e. constructions that apply to aux + verbs); others are modeled with phrasal constructions (i.e. constructions where the head daughter is required to be aux +). Conversely, auxiliary verbs are just those verbs that can take the value ‘+’ for the feature aux. But unlike previous non-transformational accounts, ours does not treat most auxiliary verbs as intrinsically aux +; rather, the feature aux is left lexically unspecified for (most of) these verbs and is resolved as ‘–’ or ‘+’ constructionally. Non-auxiliary verbs, on the other hand, do have the lexical specification aux–. Hence, only auxiliary verbs occur in the NICER environments, but the converse does not hold; that is, (most) auxiliary verbs can appear in non-auxiliary constructions. For example, an auxiliary verb like could is allowed in a non-auxiliary construction like (9a), but a non-auxiliary verb like help may not occur in an auxiliary construction, as (9b) illustrates.

There is, however, (at least) one verb that is lexically marked aux +. That is the auxiliary do. This fact accounts for a critical distributional difference between do and modal verbs, namely, that the latter but not the former occurs as the head of a simple, non-negative declarative without heavy stress, a paradigmatic aux– environment.

Because it is lexically aux +, the auxiliary do is only consistent with the NICER constructions. Crucially, notice that it is the construction, not the semantic/pragmatic function, that determines which environments take aux + verbs. Thus Is Pat ever dishonest requires an aux + verb (cf. *Seems Pat ever dishonest) whether used as a question or as an exclamation, and Lee saw you? can be used as a question (with appropriate prosody), with an aux– verb, because it does not involve the inversion construction.
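The division of labor just sketched – most auxiliaries lexically underspecified for aux, non-auxiliaries fixed as aux–, unstressed do fixed as aux + – can be illustrated with a toy Python model. This is our own illustrative encoding, not the paper's formalism: None stands in for lexical underspecification, and a construction licenses a verb only if the verb's lexical value unifies with the value the construction imposes.

```python
def licenses(construction_aux, lexical_aux):
    """A construction licenses a verb iff the verb's lexical AUX value is
    compatible with (unifies with) the value the construction imposes.
    None models lexical underspecification, resolved constructionally."""
    return lexical_aux is None or lexical_aux == construction_aux

# Toy lexicon (illustrative): auxiliaries are underspecified (None),
# non-auxiliaries are fixed AUX-, and unstressed `do` is fixed AUX+.
LEXICON = {"could": None, "help": "-", "do": "+"}

NICER = "+"         # NICER constructions require AUX+
DECLARATIVE = "-"   # plain non-inverted declaratives require AUX-

assert licenses(DECLARATIVE, LEXICON["could"])   # Robin could sing.
assert licenses(NICER, LEXICON["could"])         # Could Robin sing?
assert not licenses(NICER, LEXICON["help"])      # *Helped Robin sing?
assert not licenses(DECLARATIVE, LEXICON["do"])  # *Robin dĭd sing.
assert licenses(NICER, LEXICON["do"])            # Did Robin sing?
```

The underspecification is what lets one lexical entry for could serve in both environments, while do's fixed value confines it to the NICER constructions.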

A second feature, inv, signals whether the auxiliary precedes the subject. Thus, in Is Sam happy? the verb is specified as inv +, and in Sam is happy? the verb is specified as inv–. Like the feature aux, inv is generally lexically underspecified on auxiliary verbs. The value of these features in any particular use of an auxiliary verb is determined by the constructions it occurs in. One exception to this lexical underspecification of inv is the first-person singular use of aren’t, as in Aren’t I invited? (cf. *I aren’t invited). Hence, the latter verb use is necessarily inv +, and consequently can only appear in inversion constructions. For many speakers, the deontic use of shall is also limited to inverted environments, as illustrated by contrasts like the following, originally due to Emonds, as cited by Chomsky (1981: 209).[7]

Here there is a semantic difference between the auxiliary verb shall in (11a) and the one in (11b): the former conveys simple futurity whereas the latter has a deontic sense. This is similarly accounted for by marking deontic shall as inv +.

Finally, the grammar must prevent auxiliary verbs like do and be from combining with VPs headed by a grammatical verb, as (12) and (13) illustrate. We use the feature gram in order to distinguish the grammatical verbs (be, have, and do in our grammar) from all others. Whereas the former are lexically specified as gram +, all others are gram–.

Thus, the ungrammatical examples in (12) can be ruled out by requiring the complement of auxiliary do and auxiliary be to be gram–.[8]

The constructional rules of the grammar (lexical or phrasal) can impose particular constraints on aux, inv and gram. Thus, the construction rule for non-inverted clauses requires that the head VP be specified as aux– and inv– and that it precede its complements. Most verbs can in principle head a VP, auxiliary or not, with the exception of verbs that are obligatorily aux +, such as unstressed do, or verbs that are obligatorily inv +, such as first-person aren’t and deontic shall. Hence, we license VPs like read a book, can sing, will win, etc. However, some auxiliary verbs like the modal better are lexically specified as inv–, and therefore are only compatible with non-inversion constructions:

The rules for the NICER constructions all stipulate that the verbal daughter must be resolved as aux +. This prevents non-auxiliary verbs, which are lexically aux–, from appearing in any of these environments. For example, the construction rule for Inversion requires the auxiliary verb to precede all of its arguments. This rule interacts with other NICER constructions to license a range of different valence patterns, such as Do you sing?, Do you not sing?, Do you?, or Boy, did I like your singing!

The Post-Auxiliary Ellipsis construction, in contrast, is lexical, and applies only to aux + verbs in order to suppress the most oblique complement. More specifically, the PAE rule takes an aux + verb that selects a subject and a complement, and licenses a counterpart of this verb that no longer subcategorizes for the complement. The semantics of the missing phrase is anaphorically recovered from the semantic context of the utterance (see Ginzburg & Sag 2000, Jacobson 2008, Culicover & Jackendoff 2005, and Miller & Pullum 2013), thus predicting, among other things, that ellipsis need not have an overt antecedent and that it is immune to island constraints. By requiring that the ‘input’ verb be specified as aux +, this rule crucially limits the verbs that can appear in the construction to auxiliaries, accounting for contrasts like (15).

The PAE rule licenses verbs with an underspecified aux feature, in order to account for the fact that auxiliaries with elided complements are not restricted to NICER environments. Thus, even unstressed do – normally barred from appearing in non-inverted, non-negated clauses – can now do so, e.g. Robin did / didn’t. The underspecification of aux on the higher node makes ellipsis generally possible in non-inverted, non-negated environments, as in Robin will, as long as the verb is an auxiliary.
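The two key properties of PAE – aux + on the input, aux underspecified on the output – can be sketched as a function over toy verb signs (our own illustrative dict encoding, not the paper's formalism):

```python
def pae(verb):
    """Toy PAE rule. `verb` is a dict with 'aux' ('+'/'-'/None for
    underspecified) and 'val' (subject plus complements, in order)."""
    if verb["aux"] == "-":
        return None                   # input must be able to resolve as AUX+
    if len(verb["val"]) < 2:
        return None                   # needs a subject and a complement
    return {"aux": None,              # output underspecified: fine in plain declaratives
            "val": verb["val"][:-1]}  # suppress the most oblique complement

will = {"aux": None, "val": ["NP[subj]", "VP[base]"]}  # an auxiliary
saw  = {"aux": "-",  "val": ["NP[subj]", "NP[obj]"]}   # a non-auxiliary

assert pae(will) == {"aux": None, "val": ["NP[subj]"]}  # Robin will.
assert pae(saw) is None                                 # *Robin saw. (as ellipsis)
```

Because the output's aux is left open, the elliptical verb is not confined to NICER environments, which is how Robin did becomes licit even for unstressed do.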

The Negation construction is also lexical, and similarly requires an aux + verb, but instead of removing an argument from the verb’s argument structure, it adds a clausal negator to it. Thus, aux– verbs are not compatible with clausal negation (e.g. *Kim gets not hungry). Moreover, as in the case of PAE, the ‘output’ verb is underspecified for aux, allowing negated verbs, including unstressed do, to appear in otherwise simple declarative environments (e.g. Kim is not hungry). Thus, whereas *Robin dĭd sing is not acceptable, its negative counterpart Robin dĭd not sing is. As we shall see, the scope of sentential negation varies with the choice of auxiliary, and therefore two distinct rules will be necessary to account for the syntax and semantics of clausal negation. For example, not outscopes can in Chris cannot accept that but is outscoped by modals like may or might, as in They might not like the wine.

Contraction is also accounted for by a lexical rule, which applies to words resulting from the Negation rule (that is, words with NOT as a complement). The rule removes not from the argument structure, computes the contracted form (if none exists, the rule is undefined and therefore cannot successfully apply), and combines the verbal semantics with the negation. As with the other lexical rules, aux is underspecified in the mother node, and therefore contracted forms may appear in non-NICER environments, e.g. Robin didn’t sing.
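The feeding relation between Negation and Contraction can be sketched as two composed functions over toy verb signs (our own illustrative encoding; the CONTRACTED table is a deliberately partial map, modeling the fact that the rule is undefined where no contracted form exists):

```python
# Partial map of contracted forms; e.g. standard English has no *amn't.
CONTRACTED = {"did": "didn't", "will": "won't", "is": "isn't"}

def negate(verb):
    """Toy Negation rule: requires an AUX+-compatible verb and adds NOT
    to its argument structure; the output's AUX is underspecified."""
    if verb["aux"] == "-":
        return None                                  # *Kim gets not hungry.
    return {**verb, "aux": None, "val": verb["val"] + ["NOT"]}

def contract(verb):
    """Toy Contraction rule: feeds on Negation's output; undefined if the
    verb has no contracted form, so the rule simply cannot apply."""
    if verb is None or "NOT" not in verb["val"] or verb["form"] not in CONTRACTED:
        return None
    return {**verb, "form": CONTRACTED[verb["form"]],
            "val": [v for v in verb["val"] if v != "NOT"]}

did = {"form": "did", "aux": "+", "val": ["NP[subj]", "VP[base]"]}
am  = {"form": "am",  "aux": None, "val": ["NP[subj]", "XP"]}

assert "NOT" in negate(did)["val"]             # Robin did not sing.
assert contract(negate(did))["form"] == "didn't"  # Robin didn't sing.
assert contract(negate(am)) is None            # no standard *amn't
```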

Finally, the Rebuttal construction is a lexical rule requiring an aux + verb with a finite verb form. The rule adds the rebuttal semantics, adds phonological material, and yields a verb that is required to head an independent clause.[9] Hence, rebuttal cannot occur with non-auxiliary verbs, as in *Kim read too that book, and because aux is underspecified in the ‘output’ of the rule, the rebuttal form may appear in non-inverted environments, e.g. Kim did too leave.

There are a number of other phenomena that the present account can capture, and a number of details that the above informal discussion has omitted. We now turn to a more detailed exposition of our constructional account.

3 Sign-Based Construction Grammar

Although the analysis of EAS presented here is in principle compatible with a number of grammatical frameworks, it is particularly well expressed in Sign-Based Construction Grammar (SBCG),[10] the synthesis of Berkeley Construction Grammar[11] and Head-Driven Phrase Structure Grammar[12] developed by Fillmore, Kay, Michaelis, and Sag, together with various colleagues and students. As argued by Johnson & Lappin (1999), there are certain kinds of cross-linguistic generalizations that are difficult to state in a grammar lacking the notion of ‘construction’. Not all syntactic facts can be attributed to the properties of individual lexical items or very general combinatorial mechanisms, as assumed both in the Chomskyan mainstream and early HPSG; we argue that the EAS is one such case. Moreover, SBCG exhibits a number of important design properties that make it an attractive choice for psycholinguistic modeling. SBCG embodies Bresnan and Kaplan’s (1982) Strong Competence Hypothesis, in that it fits naturally in a processing regime where partial meanings are constructed incrementally (Tanenhaus et al. 1995, Sag & Wasow 2015). SBCG thus helps make sense of Fodor et al.’s (1974: 276) conclusions that linguistic representations are cognitively confirmed, but not the transformational processes that were supposed to relate them.[13]

3.1 Analytic Preliminaries

In the theory of Construction Grammar we present here (SBCG; see Boas & Sag (2012)), words and phrases are modeled as signs. The latter is a type of feature structure reflecting a correspondence of sound, morphology, syntactic category, meaning, and/or contextual conditions. A sign thus specifies values for the features phonology (phon), (morphological) form, syntax (syn), semantics (sem), and context (cntxt), as illustrated in the verbal sign schematically depicted in Figure 1. The feature cat contains information about part of speech, verb form (finite, infinitival, base, etc.), inversion (+/−), auxiliary (+/−), and valence (as a list of signs).

Figure 1 The geometry of verbal signs (schematic).

The Attribute-Value Matrix (AVM) format seen in Figure 1 corresponds to a function that maps each of the features to an appropriate domain, specified by the grammar. Since the domain of functions of type sign is the set {phon, form, syn, sem, cntxt}, each particular sign maps each element of this domain to a different, appropriate type of complex value, i.e. another functional feature structure. The values of the feature syn (syntax) are feature structures specifying values for category (cat) and valence (val), and categories are feature structure complexes, similar to those used in X̄-Theory (see Appendix for more details). We are simplifying morphology by describing form values in terms of a sequence of orthographic representations, and the semantics used here is also simplified in ways which we will explain in due course.

The feature val(ence) lists arguments that are overtly and locally realized in syntax. Thus, (16) corresponds to a use of the lexeme laugh where the subject is overt and local, rather than extracted (e.g. It was Tim who I think laughed) or elided (e.g. (He) laughed all day long yesterday… or Laugh!).[14]

Lexemes are organized into a type hierarchy by the grammar signature (see the Appendix), thus allowing details of a verb to be determined by inference (the ‘logic of the lexicon’), rather than by separate stipulation for each lexical item. The lexeme hierarchy allows cross-cutting generalizations to be expressed. The hierarchy of verbal lexemes, for example, is illustrated in part in Figure 2. We use lxm to abbreviate lexeme, and aux to abbreviate auxiliary. The type of the sign in (16) is intr(ansitive)-v(erb), but other types are possible. For example, the same verb can alternatively be used as a transitive verb (e.g. Robin laughed the lyrics rather than singing them), in which case it will be typed as trans-v-lxm and, as a consequence, have two valents. The same goes for all other uses of the verb, including resultatives (e.g. Robin laughed the kids off the stage or Robin laughed herself to tears), cognates (e.g. Sam laughed her maniacal laugh), and directed path constructions (e.g. Sam laughed her way out of the room), to list but a few.

Figure 2 A partial view of the English lexeme hierarchy.

In this type hierarchy there is a high-level dichotomy between transitive and intransitive verbal lexemes that is orthogonal to the distinction drawn between lexical verbs and others. This allows us to establish multiple lexical classes, e.g. aux1-verb-lexeme (aux1-v-lxm) and nonaux-subject-raising-verb-lexeme (nonaux-subject-raising-v-lxm), each of whose members have distinct class-based properties in addition to those that they all share in virtue of the common supertype subject-raising-verb-lexeme (s-raising-v-lxm). Note that the hierarchy in Figure 2 is somewhat simplified in that auxiliary verbs like equative be (e.g. This is Kim) are arguably not raising verbs; see Mikkelsen (2002) and Van Eynde (2015) for further discussion. The basis for partitioning the auxiliary verbs into two classes is discussed in Section 6.[15]

Each lexical class is defined in terms of a lexical-class construction – an implicational constraint specifying the characteristic properties of its members.[16] For example, all lexemes of the type verb-lxm are required to have part-of-speech verb and exactly one external argument (the subject), as per the constraint in (17a). In this work, we assume that SBCG constraints are stated as an implicational relation of the form ‘τ ⇒ C’, which states that all feature structures of type τ must satisfy the condition C (where C is a feature structure description).[17] Similarly, all lexemes of the type lexical-v-lxm are required to be aux–, inv– and gram–, as per (17b), and all lexemes of the type s-raising-v-lxm must have two arguments, the first of which is also the first argument of the second argument, as (17c) shows. Finally, (17d) requires lexemes typed as intr-v-lxm to have exactly one valent in their val list, and analogously, all the other types in Figure 2 impose constraints of varying degrees of granularity on the lexemes they subsume.
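The way constraints accumulate down the hierarchy can be sketched as follows. This is a toy encoding of ours (the type names follow Figure 2, but the dict representation and the specific constraints shown are only a fragment of (17)):

```python
# Implicational type constraints: type -> feature specifications it imposes.
CONSTRAINTS = {
    "verb-lxm":      {"pos": "verb"},                         # cf. (17a)
    "lexical-v-lxm": {"aux": "-", "inv": "-", "gram": "-"},   # cf. (17b)
    "intr-v-lxm":    {"valents": 1},                          # cf. (17d)
}
# Immediate supertype links (a fragment of the Figure 2 hierarchy).
SUPERTYPES = {
    "lexical-v-lxm": ["verb-lxm"],
    "intr-v-lxm":    ["verb-lxm"],
}

def constraints_of(t):
    """Accumulate constraints from t and all its supertypes by inference
    (the 'logic of the lexicon'), rather than per-lexeme stipulation."""
    merged = {}
    for sup in SUPERTYPES.get(t, []):
        merged.update(constraints_of(sup))
    merged.update(CONSTRAINTS.get(t, {}))
    return merged

# An intransitive lexical verb like `laugh` inherits from both dimensions:
laugh = {**constraints_of("lexical-v-lxm"), **constraints_of("intr-v-lxm")}
assert laugh == {"pos": "verb", "aux": "-", "inv": "-",
                 "gram": "-", "valents": 1}
```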

Signs of type word are in turn obtained by the combination of a lexeme explicitly listed in the lexicon and the appropriate constructions. Hence, the lexeme in (16) interacts with inflectional constructions to license a sign of type word, as in (18). Whereas constructions are constraints on classes of signs and on their components, the objects described/licensed by the grammar are constructs, and are displayed inside a box.

Feature structures are also used to describe complex structures, conveying exactly the same information as more familiar local trees, as illustrated in (19). Such representations correspond to a function mapping the feature mother (mtr) to a sign and the feature daughters (dtrs) to a list of signs.

An instance of a local construction is sketched in Figure 3, in this case involving two daughters, one nominal, the other verbal. Throughout this work, the symbols ‘S’, ‘NP’ and ‘VP’ are used as abbreviations. More specifically, ‘S’ corresponds to any AVM bearing the features [cat verb] and [val ⟨⟩], ‘VP’ corresponds to any AVM bearing the features [cat verb], [inv −], and [val ⟨XP⟩], and so on. From now on we will omit the phon feature, for simplicity.

Figure 3 A local phrasal Construct (abbreviated).

Constructions can be unary-branching, and require signs of particular types. For example, the preterite construction shown in (20) applies to any sign of (sub)type lexeme to yield a tensed and inflected counterpart of type word.[18] We use the notation ‘[feat1 X![feat2]]’ to indicate that the feature feat1’s value must be identical to the feature structure tagged as X elsewhere in the diagram, except with respect to the value of feature feat2. We use cxt as an abbreviation for construct.

Hence, (20) requires that the cat value of the daughter be identical to the mother’s except for the value of vf, which in the daughter is required to be base form (base) and in the mother is required to be finite (fin).[19] Finally, the preterite function computes the preterite form of W. If none exists, the construction cannot apply, simply because there is no output for that input. Hence, whereas most verbs are in the domain of the function, including the auxiliaries, other verbs are undefined for it and therefore have no preterite use.
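The role of the preterite function as a partial map can be sketched as follows (a toy model of ours: the lookup table and the dict encoding of signs are illustrative, with beware standing in for a defective verb lacking a preterite):

```python
# Partial morphological function: base form -> preterite form.
# Defective verbs (e.g. `beware`) are simply outside its domain.
PRETERITE = {"laugh": "laughed", "sing": "sang", "can": "could"}

def preterite_cxt(lexeme):
    """Toy preterite construction: maps a base-form lexeme to a finite word.
    If the morphological function is undefined, the construction cannot
    apply -- there is no output for that input."""
    if lexeme["vf"] != "base" or lexeme["form"] not in PRETERITE:
        return None
    return {"type": "word", "vf": "fin", "form": PRETERITE[lexeme["form"]]}

assert preterite_cxt({"form": "laugh", "vf": "base"})["form"] == "laughed"
assert preterite_cxt({"form": "beware", "vf": "base"}) is None  # no preterite use
```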

The semantics requires some explanation. For convenience, we are assuming a ‘Montague-style’ semantics for clauses and other expressions. For example, a proposition-denoting expression is built up in ‘Schönfinkel form’, where the verb’s semantics combines with one argument at a time – e.g. first the direct object, then the subject. ‘Y(X)’ is in fact an instantiation of a more general schema: the functional realization (of type $\tau$ ) of a multiset of semantic contributions (see Klein & Sag (1985) and Gazdar et al. (1985)).

A sbcg thus defines a set of structures, each of which is grounded in lexical signs and which can be represented as a tree, much like the tree structures of a context-free grammar. However, the labels on the nodes of these trees are not atomic category names (NP, S, V, etc.), but rather feature structures of type sign, similar to the practice of frameworks like Categorial Grammar and Generalized Phrase Structure Grammar, and most contemporary work in syntax. And as in declarative reconceptualizations of phrase structure rules like McCawley (1968), sbcg constructions are static constraints that license local structures. Take for instance the Subject–Predicate Clause constructions, which correspond to the most common type of clausal construct in English: simple declarative clauses like (22a), present subjunctive clauses like (22b), imperative-like clauses with subjects, like (22c), and so on.Footnote [20]

Such examples are all accounted for by the Subject–Predicate Construction in (23), which belongs to two distinct constructional classes, one pertaining to headedness (a subject–head combination) and the other pertaining to clausality (a declarative clause), an important aspect to which we return shortly.

This construction requires that the second daughter bear aux–, inv–, and vf fin specifications, which ensure that the head must be a finite verbal phrase. In general, the colon indicates that the immediately following constraint must be satisfied by all values of the immediately preceding variable. Thus, the $H$ variable in (23) requires that the mother’s cat value be identical to the second daughter’s cat value (effectively stating that the second daughter is the head of the construction) and the $Z$ variable has the effect that the valent subcategorized by the head daughter via val be identified with the first daughter $Z$ . Finally, the semantics of the head takes as argument the semantics of the subject.Footnote [21] Thus, a sentence like the one in Figure 4 is licensed by applying the construction in (23) to the sign for Tim and the sign for laughed. The presence of a shared tag indicates that the sign in the val list has in fact been identified with the first daughter, as required by (23).Footnote [22] Agreement information is not shown, for simplicity, and phonological composition rules are omitted. For convenience, we have also omitted discussion of linear ordering, by assuming that the order of elements on the dtrs list determines the order of elements on the mother’s form list.Footnote [23]

Figure 4 A Construct licensed by the Subject–Predicate Construction.
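The licensing logic of (23) can be mimicked in a small executable sketch. This is our own illustration, not the authors' formalism: the flat dictionaries, boolean values and string categories are drastic simplifications of typed feature structures, and the pair-encoding of functional application stands in for the Montague-style semantics.

```python
# Hypothetical sketch of the Subject–Predicate Construction as a licensing
# check: the head (second) daughter must be a finite [aux -, inv -] verbal
# sign with exactly one unsaturated valent, which is identified with the
# first daughter; the mother is a saturated clause sharing the head's cat.

def subj_pred_cxt(subj, head):
    """Return the mother sign if the two daughters are licensed, else None."""
    ok = (head["cat"] == "verb" and head["vf"] == "fin"
          and head["aux"] is False and head["inv"] is False
          and len(head["val"]) == 1)
    if not ok or head["val"][0] != subj["cat"]:
        return None
    return {"cat": head["cat"], "vf": head["vf"], "val": [],  # valence-saturated
            "sem": (head["sem"], subj["sem"])}                # head sem applied to subject sem

tim = {"cat": "noun", "sem": "Tim", "val": []}
laughed = {"cat": "verb", "vf": "fin", "aux": False, "inv": False,
           "val": ["noun"], "sem": "laugh'"}
```

Note that a VP headed by an [aux +] verb is rejected by the same check, which is exactly the mechanism exploited for unaccented do below.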

Let us now return to the fact that (23) belongs to two distinct constructional classes, one phrasal and one clausal, as seen in Figure 5. Following Ginzburg & Sag (2000), we assume that there are a number of similar Subject–Head constructions, including the one responsible for ‘Mad Magazine’ sentences like What, me worry? (see Akmajian (1984) and Lambrecht (1990)) and the construction responsible for absolute clauses like My friends in jail, I’m in deep trouble (see Stump (1985) and Culicover & Jackendoff (2005)). Each of these Subject–Head constructions is a non-declarative variant of (23), and specifies idiosyncratic morpho-syntactic, pragmatic, semantic and/or phonological information. The type subject-head-cxt belongs to a complex system of constructs as in Figure 6, each of which corresponds to a class of constructions exhibiting some grammatically significant set of properties. Each such property set is specified by an implicational constraint whose antecedent is the name of that subtype. In general, the mtr value of any feature structure of type phrasal-cxt is required to be of type phrase (see Appendix). The different subtypes of headed-cxt provide a more or less traditional taxonomy of local dependency relations between the head daughter and its sister(s).Footnote [24] Thus, subject-head-cxt licenses subject-VP phrases, pred-head-comp-cxt licenses head–complement phrases, aux-initial-cxt licenses SAI phrases, head-modifier-cxt licenses head–modifier phrases, and so on.

Figure 5 Types Generalizing over Subject–Predicate Clauses.

Figure 6 Construct type hierarchy.

Figure 7 Clausal type hierarchy.

Following Sag (1997) and Ginzburg & Sag (2000), phrasal constructs are in addition organized in terms of clausal types. That is, there is a subtype of phrasal-cxt called clause (cl), which has various subtypes, including core-cl and relative-cl. The subtypes of core clause include declarative-cl, interrogative-cl, and exclamative-cl, as shown in Figure 7. Hence, we obtain the cross-classification shown in Figure 5. The variable grain of grammatical generalizations can be modeled precisely in such a type system, where idiosyncratic constraints can be imposed by a construction that defines the properties of a ‘maximal’ type (one that lacks subtypes), while constraints of full generality or of intermediate grain can be stated in terms of appropriate superordinate types, e.g. construct, or any of the subtypes of construct that the grammar recognizes.Footnote [25]
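The variable grain of such a cross-classifying type system can be illustrated with a small sketch. The type names loosely follow Figures 5–7, but the constraint strings and the multiple-inheritance encoding are our own hypothetical simplifications; real SBCG types carry feature-structure constraints, not strings.

```python
# Hypothetical sketch of constraint inheritance over construct types.
# A maximal type (here subj-pred-cl) inherits every constraint stated on
# any of its supertypes, along both the phrasal and the clausal dimension.

SUPERS = {
    "subj-pred-cl": ["subject-head-cxt", "declarative-cl"],  # cross-classified
    "subject-head-cxt": ["headed-cxt"],
    "declarative-cl": ["core-cl"],
    "core-cl": ["clause"],
    "clause": ["phrasal-cxt"],
    "headed-cxt": ["phrasal-cxt"],
    "phrasal-cxt": ["construct"],
}

CONSTRAINTS = {
    "construct": ["has mtr and dtrs"],
    "phrasal-cxt": ["mtr is of type phrase"],
    "headed-cxt": ["one daughter is the head"],
    "subject-head-cxt": ["first daughter is the subject"],
    "declarative-cl": ["sem is a proposition"],
}

def inherited(t, seen=None):
    """All constraints holding of type t, collected along every supertype path."""
    seen = set() if seen is None else seen
    if t in seen:
        return []
    seen.add(t)
    out = []
    for s in SUPERS.get(t, []):
        out += [c for c in inherited(s, seen) if c not in out]
    out += CONSTRAINTS.get(t, [])
    return out
```

Fully general constraints thus need to be stated only once, on construct or phrasal-cxt, and idiosyncrasy is confined to the maximal types.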

The interplay between phrasal and clausal dimensions is of particular importance for auxiliary structures. Auxiliary verbs are distinctive in that they may introduce clausal structures of various kinds; see Culicover (1971), Fillmore (1999), Newmeyer (1998: 46–49), and Ginzburg & Sag (2000: Ch. 2). But before discussing SAI constructions in more detail, it is necessary to discuss the analysis of VPs. The verbal complement realization pattern typical of English VPs and predicative expressions of all categories requires all arguments except the subject to be realized within a head–complement construct, as seen in (24). In SBCG, L-variables range over lists of feature structures and ‘ $\oplus$ ’ denotes the append relation, which concatenates two lists into one. Thus, (24) splits the valence list of the head daughter into two sublists, one containing the subject phrase $Y$ , and a list $L$ containing all other subcategorized (sister) signs.

More informally, (24) requires (a) the mother’s syn value to be identical to that of the head daughter, except for the value of val; (b) the mother’s val list to be singleton, containing just the first member ( $Y$ ) of the head daughter’s val list; and (c) the mother’s semantics to be arrived at by applying the head daughter’s sem value to the sem value of its first valent, applying the resulting function to the sem value of the next valent, and so on, until all valents (the construct daughters) have contributed. An example of the application of (24) is given in Figure 8.Footnote [26]

Figure 8 A Predicational Head–Complement Construct.
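Clauses (a)–(c) above amount to a list split plus a left fold over the complements. The sketch below is a hypothetical rendering of that idea (dictionary signs and string semantics are our simplifications; the pair-encoding again stands in for functional application):

```python
# Hypothetical sketch of the Predicational Head–Complement Construction:
# the head's val list <Y> ⊕ L is split, the complements must realize L,
# and the mother retains only the subject valent Y.
from functools import reduce

def pred_head_comp_cxt(head, comps):
    """Combine a head with all of its valents except the subject."""
    subj, rest = head["val"][0], head["val"][1:]   # split: <Y> ⊕ L
    if [c["cat"] for c in comps] != rest:
        return None                                # complements must match L
    # fold: apply head sem to each complement sem in turn
    sem = reduce(lambda f, c: (f, c["sem"]), comps, head["sem"])
    return {"cat": head["cat"], "val": [subj], "sem": sem}

visited = {"cat": "verb", "val": ["NP", "NP"], "sem": "visit'"}
kim = {"cat": "NP", "sem": "Kim", "val": []}
```

The resulting VP still has a singleton val list, which is exactly what the Subject–Predicate Construction then consumes.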

We can now illustrate the analysis of a simple subject–predicate clause. As already discussed, all non-auxiliary verbs are subtypes of lexical-v-lxm and required to be aux– and inv– by the constraint in (17b). In contrast, auxiliary verb lexemes are raising verbs and subtypes of either aux1-v-lxm or aux2-v-lxm, as shown in Figure 4. Thus, auxiliaries are exempt from (17b) as (25) illustrates.Footnote [27] Consequently, the aux and inv values of auxiliary verb lexemes are not fixed.

Unlike other verbs, modals are in general finite – they appear where finite non-auxiliary verbs do, assign nominative case to their subjects, and cannot appear as complements of other verbs. All of this is straightforwardly accounted for by assuming listemes like (25), which state that the verb form is finite. Moreover, because inflectional constructions such as the preterite require the lexeme daughter to be specified as [vf base], it follows that modals cannot be inflected by such constructions. Lexemes that do not inflect become words via the general construction in (26).Footnote [28]

Hence, base form verbal lexemes like be are ‘promoted’ to words – and as a consequence, allowed to appear in the dtrs list of phrasal constructions – as are finite modals like can in (27), and more generally adverbs, proper names, prepositions, etc.

This analysis thus allows can to be a verb with the specifications [aux $-$ ] and [inv $-$ ] as seen below, in which case it is compatible with both the Predicational Head–Complement Construction in (24) and the Subject–Predicate Construction in (23). Consequently, sentences like the one in Figure 9 are licensed.

Figure 9 A Subject–Predicate Clause.

So, as in previous analyses using the feature aux (e.g. that of Ross (1969)), auxiliaries are treated uniformly as verbs. But in the present context, the feature aux has a new and quite different interpretation. In previous analyses, [aux +] elements are auxiliary verbs. Here, [aux +] elements are expressions which appear in the environments delineated in Section 1.

As opposed to the flatter structure associated with the Syntactic Structures aux analysis, the constructions discussed so far create a right-nested analysis tree for English VPs. If a finite clause includes multiple verbs, one of which is a modal, the modal must appear first because it is finite and must serve as the head of the finite VP. Moreover, the complement selected by the modal must be [vf base], as required by the modal:

A selling point of the Syntactic Structures analysis of EAS appeared to be that it stipulated the order and selection properties of aux elements in the PS rules, as in (29), an analysis that has since been abandoned:

But the cost of this proposal is excessive – it denies syntactic category membership to both have and be.Footnote [29] The alternative, more conservative strategy embraced here is to explain the restrictions on auxiliary order through the interaction of constraints on syntactic and semantic selection, and morphological gaps. The absence of non-finite forms for modals accounts for (30), while the semantic restrictions on the progressive may well be sufficient to explain the deviance of (31a–b) in the same way that the deviance of examples like (32)–(33) is accounted for. For further discussion, see Binnick (1991).
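The idea that auxiliary order falls out of vf selection plus morphological gaps, with no ordering stipulated, can be sketched as follows. The mini-lexicon is hypothetical and deliberately crude (one entry per word form, pairing the form's own vf with the vf it demands of its VP complement):

```python
# Hypothetical sketch: each verb form records (its own vf, the vf it
# requires of its complement's head). Modals have only finite forms, so
# they can never satisfy another verb's non-finite demand: no ordering
# rules are stated, yet *have will, *will will, etc. are excluded.

WORDS = {
    "will":  ("fin",  "base"),   # modal: finite form only
    "have":  ("base", "psp"),    # perfect have selects a past participle
    "been":  ("psp",  "prp"),    # progressive be, past participle form
    "being": ("prp",  "pass"),   # passive be, present participle form
    "eaten": ("pass", None),     # passive participle, end of the chain
}

def licit(sequence):
    """Check that each verb's complement meets its vf demand."""
    for head, comp in zip(sequence, sequence[1:]):
        if WORDS[head][1] != WORDS[comp][0]:
            return False
    return True
```

The full chain will have been being eaten comes out well-formed, while any reordering fails at the first mismatched vf demand.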

The have lexeme under discussion here has to be distinguished from its homophonous non-auxiliary counterpart. Interestingly, in some British varieties, all forms of have exhibit the NICER properties, while in American varieties, non-auxiliary have is always a lexical-v-lxm, yielding the distribution in (34).Footnote [30]

The preceding discussion sets the stage for the analysis of auxiliary do, which is effected in its entirety by introducing the listeme in (35), whose semantics is the identity function over VP meanings (i.e. a function that merely returns its VP-meaning input unchanged):

Since this listeme is specified as [vf fin], so are all the lexemes that it licenses. The words derived from auxiliary do must also be [vf fin], and this gives rise to signs for does, finite do, and did, but not to any for done, base-form do, or doing. This is the right result for standard American varieties of English, and with a small adjustment it provides an immediate account of dialects where non-finite auxiliary do is also possible.Footnote [31] Similarly, all the words derived from the lexeme in (35) are specified as [aux +]. This has the immediate effect of making auxiliary do incompatible with the constructions that would require it to be [aux $-$ ]. Recall, however, that the [aux +] constructions are precisely the NICER constructions discussed in Section 1. Thus, nothing prevents finite do, does or unaccented forms of do from projecting a VP, but no such VP can serve as the head of a subject–predicate construct, whose head daughter is required to be [aux $-$ ]. This provides an explanation for contrasts like the following, already discussed above:

As we will see in Section 8, the grammar of reaffirmation allows for sentences like (36b) by licensing [aux +] VPs whose verb must be focused (either by accent or by the presence of a reaffirming particle). In similar fashion, the treatment of other NICER properties will provide an [aux +] environment that will accommodate auxiliary do in a subject–predicate clause that also contains negation (Chris does not object.) or VP Ellipsis (This one does_.).

Finally, notice that the co-occurrence restrictions of auxiliary do are also accounted for in our analysis. Given the lexeme in (35), it follows that the second member of do’s val list (its VP[base] complement) must be [gram  $-$ ]. This means that the VP complement of do cannot be headed by an auxiliary verb, correctly ruling out examples like those in (37), where the complement’s head daughter is be or auxiliary have:

VPs headed by a modal or auxiliary do are already ruled out by the fact that these verbs have only finite forms as signs of type word.Footnote [32]
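The way do's lexical specifications alone carve out its distribution can be summarized in a toy sketch. This is our illustration only: the dictionary entries are hypothetical, and the requirement that the complement not be headed by an auxiliary (the [gram −] restriction of (35)) is approximated here by checking the complement head's aux value.

```python
# Hypothetical sketch of the do listeme's specifications at work.
# DO is lexically finite and [aux +]; its VP complement must be a
# base-form VP not headed by an auxiliary (our stand-in for [gram -]).

DO = {"form": "do", "vf": "fin", "aux": True,
      "comp": {"cat": "verb", "vf": "base", "aux": False}}

def do_takes(vp_head):
    """Can auxiliary do combine with a VP headed by vp_head?"""
    req = DO["comp"]
    return (vp_head["cat"] == req["cat"] and vp_head["vf"] == req["vf"]
            and vp_head["aux"] == req["aux"])

def heads_subj_pred(verb):
    """The Subject–Predicate head daughter must be [aux -] and finite."""
    return verb["aux"] is False and verb["vf"] == "fin"

leave = {"cat": "verb", "vf": "base", "aux": False}   # ordinary base-form verb
be_base = {"cat": "verb", "vf": "base", "aux": True}  # auxiliary: excluded complement
```

So do combines with leave but not with be or auxiliary have, and its [aux +] specification bars it from heading a plain subject–predicate clause, exactly the pattern in (36)–(37).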

4 Aux-Initial Constructions

Auxiliary verbs are distinctive inter alia in that they may introduce clausal structures of various kinds, as noted by Culicover (1971), Fillmore (1999), Newmeyer (1998: 46–49), and Ginzburg & Sag (2000: Ch. 2), and as illustrated in (38). These clauses will be analyzed in terms of a family of subject–auxiliary ‘inversion’ phrasal constructions (each with peculiar syntactic constraints) that cross-intersect with a family of clausal constructions (some interrogative, some exclamative, some conditional, etc.). Although such structures are customarily analyzed in terms of subject–auxiliary ‘inversion’, here we analyze them without movement operations of any kind.

Chomsky (2010: 9) criticizes constructional analyses of SAI – such as Fillmore (1999), Ginzburg & Sag (2000), and the present account – precisely because they distinguish several types of SAI construction, rather than resorting to a single operation (movement, i.e. Internal Merge). However, Chomsky’s critique ignores the fact that there are syntactic, semantic and pragmatic idiosyncrasies that differentiate the various SAI constructions. For example, interrogative and exclamative SAI constructions like (38a) come in different flavors, some more idiomatic than others.Footnote [33] In addition, many of these SAI construction types are restricted to a small subset of auxiliary verbs. Thus, inversions like (38c) must be negative and are restricted to the auxiliary do, those like (38d) are endowed with bouletic modality and restricted to the auxiliary may, those like (38f) have conditional meaning that is otherwise missing from all other SAI constructions and are restricted to a couple of auxiliary verbs, and those like (38g) require a clause-initial adverb. Every theory must be able to account for these idiosyncratic facts, and constructional frameworks can capture them directly, in terms of the specific syntactic, semantic and pragmatic constraints introduced by each kind of SAI construction. Finally, Chomsky’s objection also ignores the fact that treating the various SAI constructions as subtypes of a single type ensures that no generalizations are missed.

As already discussed above, SAI constructions clearly instantiate distinct sub-constructions, each involving a language-particular correlation of aux-initial form with a particular meaning, as well as other kinds of idiosyncrasy. On the other hand, they also exhibit a ‘family resemblance’ which we capture via (39).Footnote [34]

This construction requires that in aux-initial clauses, the head daughter (the first daughter, as indicated in the dtrs list) must be an [inv +] word, which guarantees that all such constructs are headed by an ‘invertible’ finite auxiliary verb. The sisters of the head daughter are identified with the elements of its val list and hence correspond to its subject and any other valents that it selects. Since the mother’s val list is empty, the constructed clause cannot combine with further valents (i.e. it is ‘valence-saturated’). In this view, learning to produce SAI constructions amounts to learning that such constructions begin with an [inv +] word that selects the subsequent expressions, and there is no reason at all to think this is mysterious, as proposed, for example, in Estigarribia (2007, 2010). This simply requires (i) that the child correctly classify [inv +] words (which can be done on the basis of positive evidence) and (ii) that the child can produce/recognize valents that are independently syntactically licit. Importantly, this view predicts certain errors if the child miscategorizes as [inv +] a word that is not, or if the child builds an incorrect valent (for example, because the child does not know how to form relative clauses). But there are other errors that the child can make. For example, before the construction in (39) is learned, the child may use a strategy of adding an [inv +] word at the beginning of a regular sentence (Estigarribia 2010), yielding auxiliary doubling, which is attested in on average 14% of child attempts in Ambridge et al. (2008). These may be either competence- or performance-related errors, but whichever the case may be, the evidence is not only consistent with a constructional account of SAI, but receives a plausible explanation within the present framework, in terms of interference from a regular Subject–Predicate construction with a finite head.
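The core licensing condition just described (an [inv +] finite auxiliary head first, its sisters exhausting its val list, a saturated mother) can be sketched executably. As before, this is our hypothetical dictionary rendering, not the construction itself:

```python
# Hypothetical sketch of the Aux-Initial Construction: the first daughter
# must be an invertible finite auxiliary word, and the remaining daughters
# must realize exactly its valents; the mother is valence-saturated.

def aux_initial_cxt(dtrs):
    """Return the mother clause if the daughter sequence is licensed."""
    head, rest = dtrs[0], dtrs[1:]
    if not (head["aux"] and head["inv"] and head["vf"] == "fin"):
        return None                                 # head must be [inv +] aux
    if head["val"] != [d["cat"] for d in rest]:
        return None                                 # sisters = head's valents
    return {"cat": head["cat"], "val": [], "inv": True}  # saturated clause

can = {"cat": "verb", "vf": "fin", "aux": True, "inv": True, "val": ["NP", "VP"]}
kim = {"cat": "NP"}
sing = {"cat": "VP"}
laughed = {"cat": "verb", "vf": "fin", "aux": False, "inv": False, "val": ["NP"]}
```

A sequence headed by a non-invertible verb like laughed is simply unlicensed, with no uninverted source structure ever being posited.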

The constraint in (39) describes the common properties of aux-initial clauses. At a finer grain, we find particular varieties of aux-initial clause, each with its own distinctive meaning, as illustrated in Figure 10 for two kinds of aux-initial construct: polar-interrogative-clause (polar-int-cl) and aux-initial-exclamative-clause (aux-initial-excl-cl). The former must simultaneously obey (39) and the constraints that define interrogative-clause; the latter must simultaneously obey (39) and the constraints that define exclamative-clause. A construct of the former type is in Figure 11, licensed by the polar-interrogative-cl construction in (40).

Figure 10 Two Types of Aux-Initial Clause.

If grammars involve direct construction of signs, rather than movement, then the interrogative is not derived from an uninverted structure, as noted by Marcus et al. (2003). Chomsky’s famous auxiliary puzzle in (41) thus presents no particular challenge, as there is no way for (40) to derive (41b). The puzzle of the deviance of (41c) is an artifact of a transformational approach to aux-initial word order.

An analogous treatment provides each other kind of aux-initial clause with its own semantics and grammatical restrictions, thus enabling the analysis sketched here to ‘scale up’ to account for the complete set of English aux-initial constructs.Footnote [35] The non-wh-question meaning $\lambda \{\,\}$ [PAST(get(the-job))(Kim)] is formed by $\lambda$ -abstracting over the empty set to produce a function that maps the empty set (as opposed to a non-empty set of wh-parameters) onto the same proposition that Kim got the job denotes. See Ginzburg & Sag (2000) for details.

Figure 11 Analysis of Did Kim get the job?

Head movement analyses of inversion have accounted for the basic pattern of alternation: tensed auxiliary verbs appear in situ and, when other factors permit, they appear in inverted position. In non-transformational accounts like Gazdar et al. (1982) there are two possible positions where finite auxiliaries can be directly realized. Interacting factors constrain the choice. Yet in inversion too, there is a certain degree of lexical idiosyncrasy that stands as a challenge for any analysis of EAS. First, there is the well-known contrast in (42).

Whereas the auxiliary verb shall in (42a) conveys simple futurity, the one in (42b) has a deontic sense. One might think this difference in interpretation has something to do with interrogativity, rather than inversion. However, there is a further fact about such contrasts not noticed by Emonds, Chomsky, or Gazdar et al. The simple futurate reading is possible in an uninverted embedded interrogative like (43):

Moreover, it seems too strong to rule out all inverted instances of futurate shall, given the possibility of a futurate interpretation of (44):

It seems that the ‘unexpected’ fact here is the unavailability of the deontic reading in (42a). We capture this by positing two distinct listemes, like those in (45). By leaving futurate shall unspecified for inv, we allow inverted shall to receive either a futurate or a deontic interpretation.

Similarly, the following pair exhibits a scope difference (examples due to John Payne, as cited by Gazdar et al. (1985: 64)):

In (46a), the modal has scope over the negation (‘It is possible that Kim might not go.’), whereas in (46b), only the reverse scope is possible (‘Is it not the case that possibly Kim will go?’). Here, however, it seems that interrogativity, rather than inversion, is determining the scope, as (46b) is paralleled by (47):

We return to this matter in Section 7.

There are some finite auxiliary verbs that cannot appear in inverted position, as already discussed in Section 2. This is the case of better, for example, as pointed out by Gazdar et al. (1982):

On distributional grounds, better is arguably a finite auxiliary. It projects a finite clause, for example. Though better cannot be inverted in questions (or other inversion constructions), it can participate in finite negation:

In the present account better is specified as inv–, which in turn means that it cannot appear in Inversion constructions.

5 Post-Auxiliary Ellipsis

The next NICER property is Post-Auxiliary Ellipsis (PAE), as in (50).Footnote [36]

It is generally assumed, following Hankamer & Sag (1976) (HS), that PAE is distinct from Null Complement Anaphora (nca), seen in (51).

PAE belongs to a class of anaphora processes HS refer to as ‘Surface Anaphora’, which are supposed to disallow exophoric (deictic) uses, to allow an elliptical phrase to contain the antecedent of a pronominal anaphor, and to require syntactic parallelism between the ellipsis target and the ellipsis antecedent. nca, by contrast, belongs to HS’s class of ‘Deep Anaphora’, and hence is supposed to allow exophora, to disallow ‘missing antecedents’, and to allow a looser syntactic parallelism between ellipsis target and antecedent. The data underlying this dichotomy, like so much of the critical syntactic data from the last century, seem much less clear than they once did, as has been pointed out by various scholars (Miller & Pullum 2014), sometimes on the basis of experimental evidence (Kertz 2010, Kim et al. 2011). The general conclusion seems to be that there is a difference in degree of difficulty between the two classes, rather than a sharp bifurcation, and that acceptability differences can be accounted for only by appealing to a number of interacting factors.

This could be relevant because if there is no sharp analytic distinction to be drawn between the grammar of PAE and that of NCA, for example, then there is no general ellipsis phenomenon that needs to be restricted to auxiliary verbs. However, there are important data suggesting that a distinction must be drawn. For example, only auxiliaries support pseudogapping as in (52).

Second, Arnold & Borsley (2010) note that whereas auxiliaries can be stranded in certain non-restrictive relative clauses such as (53), no such possibility is afforded to non-auxiliary verbs, as (54) illustrates.

It is possible that these and other ellipsis contrasts are due to factors that are independent of the auxiliary verb class, but we believe that an aux-sensitive ellipsis process remains, as otherwise it is unclear how to account for (55).

For this reason, we continue to treat PAE as an aux-related matter, although we are aware of the problems involved with the criteria proposed in the literature to motivate the deep/surface distinction. The analysis of PAE involves a single derivational (i.e. lexeme-to-lexeme) construction which removes the complement from the val list of the auxiliary verb as seen in (56).

The daughter lexeme in (56) is required to have (at least) two elements in its val list, the last of which, $X$ , is not present in the mother’s non-empty val list. The mother’s semantics is the result of applying the daughter’s sem ( $Z$ ) to the variable V $^{\prime }$ . The latter variable is assigned a value in context, on the basis of the meaning of a salient utterance (sal-utt) in the context of utterance, subject to general principles of the theory of ellipsis, worked out in Ginzburg & Sag (2000), Culicover & Jackendoff (2005), Jacobson (2008), Sag & Nykiel (2011) and Miller & Pullum (2014). So on the basis of the PAE Construction in (56) and the existence of a lexeme like (57), the grammar licenses both aux + and aux– lexemes, since aux is underspecified in the mother node. This derivation is illustrated in Figure 12.
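The effect of the PAE Construction (drop the last valent of a lexeme with at least two valents, leave aux open, and feed the contextual variable into the semantics) can be rendered in a toy sketch. The dictionary encoding and the use of None for an underspecified aux value are our own hypothetical devices:

```python
# Hypothetical sketch of the Post-Auxiliary Ellipsis construction as a
# lexeme-to-lexeme mapping: the final valent X is removed, aux is left
# underspecified (None here), and the semantics applies to a contextually
# resolved variable V' (represented as a placeholder string).

def pae_cxt(lexeme):
    """Return the PAE counterpart of a lexeme, or None if it cannot apply."""
    if len(lexeme["val"]) < 2:
        return None                        # daughter needs >= 2 valents
    mother = dict(lexeme)
    mother["val"] = lexeme["val"][:-1]     # elided complement X removed
    mother["aux"] = None                   # underspecified: aux + or aux -
    mother["sem"] = (lexeme["sem"], "V'")  # V' resolved from a salient utterance
    return mother

can = {"form": "can", "vf": "fin", "val": ["NP", "VP"], "aux": None, "sem": "can'"}
intrans = {"form": "laugh", "vf": "base", "val": ["NP"], "aux": False, "sem": "laugh'"}
```

Because aux is left open in the output, the resulting word can head either a subject–predicate clause (Kim can) or an aux-initial clause (Can Kim?), as Figures 13 and 14 illustrate.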

The lexeme-to-word Zero Inflection construction discussed above in (26) can then apply to the lexeme in the mother node of Figure 12 and give rise to the word at the bottom of the tree in Figure 13, a phrasal construct licensed by the Predicational Head–Complement Construction in (24).

Figure 12 Derivation of the PAE use of can.

Figure 13 Analysis of Kim can.

Given that the PAE Construction allows can to be aux +, it also follows that the verb can appear in Aux-Initial Constructions instead, as licensed by the phrasal construction in (39) and illustrated in Figure 14.

Figure 14 Analysis of Can Kim?

Transformational approaches, developed in a rich tradition originating with Chomsky (1955), have assumed that the auxiliary do is transformationally inserted when a tense element – pressed into service as an independent syntactic atom – is ‘stranded’ by the application of transformational rules. Such rule applications are involved in the analysis of all the NICER properties, and hence do appears in precisely those environments.Footnote [37] It is interesting, therefore, to consider the distribution of do in British English and related varieties. The following examples are discussed by Miller (2013) and the references cited there.

These forms of do appear only in the context of PAE, as the ungrammatical examples in (63) demonstrate:

The transformational analysis provides no obvious way of generalizing to these examples of ‘unAmerican non-finite do’: under reasonable assumptions about the grammar of do and tense, the two simply do not interact here. Hence do-support will have nothing to do with the analysis of (58)–(62). But these examples, as Miller argues, involve non-finite forms of the same auxiliary do that in other varieties exhibits only finite forms. A lexical analysis of do (e.g. where the non-finite form is listed in the lexicon already in PAE form) is preferable because it provides the basis for a natural account in terms of variations in lexical form, an independently well-established kind of dialectal variation. The non-finite forms of auxiliary do also show that there are lexical forms in certain varieties of English which require PAE. Other forms, e.g. being (in all varieties, as far as we are aware), are systematic exceptions to PAE:Footnote [38]

6 Negation

Ever since Klima (1964), it has been known that there is a distinction to be drawn between constituent negation and what Klima refers to as ‘sentential’ negation. It is important to understand that this is fundamentally a syntactic distinction. Sentential negation, for example, cannot be equated with ‘denial’, ‘discourse negation’, and the like, as it sometimes is.Footnote [39]

6.1 Constituency

Constituent negation involves structures like those in (65a), while sentential negation leads us, we argue, to structures like (65b):

The assumption that both types of negation exist leads us to the conclusion that a sentence like (66) is ambiguous (indicated by ‘&’).

In addition, it is predicted that the two types may co-occur within a single sentence, as illustrated in (67):

Thus for any given example involving post-auxiliary not, it is not clear in advance whether not is instantiating constituent or sentential negation.

6.2 VP Constituent Negation

When not negates an embedded constituent, it behaves much like the negative adverb never (see Baker (1989)):

Kim & Sag (2002) account for these and other properties of constituent negation by regarding not as an adverb that modifies non-finite VPs, rather than as the head of its own functional projection, as is often assumed in movement-based discussions. On their analysis, modifiers of this kind precede the elements they modify, thus accounting for the contrasts between (69a, b) and (70a, b):Footnote [40]

And not’s lexical entry includes a constraint ensuring that the VP it modifies is non-finite:

Syntactic evidence exists to confirm the indicated constituency in most cases, e.g. the possibility of it-clefts and wh-clefts with the negated VPs as focus:Footnote [41]

It is an important semantic fact that the scope of a modifier adjoined to a VP always includes that VP, as illustrated in (75):

Moreover, VP-adjoined modifiers can never outscope a higher verb. This entails that in examples like (76a–d), the finite verb always outscopes the adverb:

The lexical entry for not (like that of any scopal modifier) thus includes the information that the VP it modifies is its semantic argument.

Finally, note that the constituent modifier analysis of not, forced by the presence of And so can we (Klima (1964); Horn (1989)), predicts the existence of ambiguities like (77a, b):

It also correctly predicts the lack of ambiguity in examples like (78):

In non-‘challenge’ uses, the polarity of the tag must be opposite to that of the sentence to which it is adjoined. The auxiliary in the tag here is negative, indicating that the sentence it is adjoined to is positive, even though it contains not. This is the general pattern predicted by the constituent negation analysis, again following Klima (1964) and Horn (1989).

6.3 Sentential Negation

Sentential negation involves an auxiliary verb and the adverb not:

Di Sciullo & Williams (1987) have suggested that sentential negation should be analyzed (as the exceptional orthography for cannot suggests) in terms of a morphological combination of a finite verb with not. Bresnan (2000) proposes that not is a modifier adjoined to the finite verb, as in (80):

But both these analyses seem inconsistent with examples like the following:

Since the adverbs in these examples can outscope the preceding auxiliary (e.g. obviously and not can outscope will in (81a)), they are unlike the VP-modifiers discussed above. Moreover, the fact that (non-challenge) tag questions like the one below are formed with a positive auxiliary further suggests that this use of not must have wide scope:

We regard the occurrences of not in (81a–c) as instances of sentential negation. The evidence speaks strongly against the morphological incorporation analysis, as it would require, for example, that sequences like will-obviously-not be treated as a single word, an unintuitive consequence lacking independent motivation. Moreover, if the negation not were to form a morphological unit with the preceding finite auxiliary, then one would expect, contrary to fact in most varieties, that not should appear in inverted structures along with the verb, as illustrated in (83):

These inversions, though historically attested and still acceptable as formal variants in certain British dialects (v. Warner (Reference Warner2000)), are unacceptable in American varieties, where the only possible renditions of (83b, d) are (84a, b):

These data present no particular descriptive challenge. In fact, since sign descriptions may include information about context (usually via the feature cntxt), it is even possible to describe a system where examples like (83) belong to a ‘formal register’, assuming that an appropriate theory of register has been integrated into the theory of context.

Let us return to the adverb scope dilemma raised by (81). The verb modifier analysis assigns these examples a structure like (85):

But this predicts the wrong scope, namely (86b), instead of the observed (86a):

The syntactic verb modifier analysis thus also appears inadequate on more than one count. On the basis of diverse evidence, Kim & Sag (Reference Kim and Sag2002) argue that sentential negation should be distinguished from constituent VP-negation in terms of the structural contrast illustrated in (65) above. In fact, Kim and Sag argue that the negative adverb in sentential negation is selected by the verb as a complement. The argument that they make for this analysis includes: (a) evidence for a ‘flat’ structure where not is the sister of the finite verb it co-occurs with; (b) the uniform ordering of sentential negation in complement position; (c) the impossibility of iterating the negative adverb in sentential constructions; (d) the lexically idiosyncratic nature of the scope of sentential negation; (e) the possibility of stranding not only in finite instances of VP ellipsis; and (f) the requirements of a systematic account of ‘polarized’ finite auxiliaries. All of these phenomena (in addition to others involving French pas) are naturally accounted for if finite verbs in English (and French) are allowed to select a negative adverb complement.

Let us consider some of this evidence in more detail. The interaction of sentential negation and inversion, as we have just seen, argues against a [V $_{\text{fin}}$ not] structure and the adverbial interpretation facts just considered speak against a [V $_{\text{fin}}$ [not VP]] structure (for sentential negation). Another argument against the latter constituency is provided by contrasts like the following, which show that in VP-fronting constructions, finite negation remains unfronted:

All of these data are consistent with a flat structure for sentential negation, i.e. the structure in (65b), which would follow straightforwardly from a verb-complement analysis, as would the position of sentential not (English complements uniformly appear in post-head position) and the impossibility of iterating not in sentential negation.

The scopal idiosyncrasies of auxiliary negation are intriguing. We have already established that constituent negation always takes narrow scope with respect to a finite auxiliary, as in (88a–c):

But the scope of sentential negation varies with the choice of auxiliary. For example, not outscopes can or will, but is outscoped by deontic modals like may or must:

These contrasts also show themselves in the interaction of modals with other negative adverbs, e.g. never, and also with positive adverbs that are permitted post-auxiliarily:

Though the interaction of modals and post-modal adverbs is fixed, it seems that modals exhibit variable scope if negation is introduced nominally. The following are ambiguous.

This last pattern is the more familiar one, as we expect in general to find scopal ambiguity. The modal–adverb interactions are only partly predictable on semantic grounds. In an important study, Warner (Reference Warner2000) discusses in detail the following verb classes defined in terms of their scope properties:

Given the partly arbitrary classification of the sort Warner observes, the motivation for a lexical analysis of the sort he proposes is clear. A modal can select for a negative adverb complement and assign it a fixed scope. By contrast, negative nominals involve no idiosyncratic selection and take scope in accordance with general interpretative principles.

In sum, there is considerable, but often subtle evidence in favor of a two-way distinction between non-finite constituent negation and an analysis of sentential negation in terms of an adverb-selecting valence pattern for finite auxiliary verbs. We analyze sentential negation as follows. First, we group the lexemes represented in (95a) together with the various be and have lexemes into the class aux1-v-lxm and Warner’s class in (95b) becomes the class aux2-v-lxm. These are referenced by the following two derivational constructions:Footnote [42]

Let us discuss (96) first. This derivational construction licenses a new lexeme whose val list extends that of any aux1-v-lxm listed above by adding a negative adverb (not). In addition, the semantics of the auxiliary is adjusted via a combinator (in a spirit similar to that of Gazdar et al. (Reference Gazdar, Klein, Pullum and Sag1985); see also Klein & Sag (Reference Klein and Sag1985)) that will allow the semantics of the mother (an auxiliary verb) to take its arguments in multiple orders. More specifically,  allows an auxiliary verb (of semantic type $\langle \text{VP},\text{VP}\rangle$ ) two combinatoric options: either it takes NOT first and then its VP argument, or in the opposite order, if SAI occurs. In either case, NOT scopes over the verb:

The variables  range over argument meanings and the meanings of modifiers like not, that is, functions from VP meanings to VP meanings, where VPs map NP meanings to propositions (n.b. not individuals to truth values).  can be instantiated either as  or as , allowing both argument orders, depending on whether SAI took place.

Now, if the lexeme licensed by (97) is fed to the lexeme-to-word Zero Inflection Construction in (26), then we obtain a sign of type word that produces the analysis sketched in Figure 15. Here, an ordinary predicative head–complement construct is licensed by the construction in (24) and the auxiliary verb derived in the fashion just sketched.

Figure 15 Analysis of will not go.

Alternatively, if the lexeme licensed by (97) is instead fed to the PAE Construction in (56) before being fed to the Zero Inflection Construction in (26), then the VP complement is not present in val and we obtain VPs like will not. This VP is identical to the one in Figure 15, except that there are only two daughters, namely will and not. Finally, if the auxiliary is used in the Aux-Initial Construction in (39) instead of the Predicational Head–Complement Construction then we obtain the inversion counterparts of the aforementioned structures.

The analysis of narrow-scope not is similar, given that the combinator in (97) is defined as in (99).Footnote [43] This definition ensures that the auxiliary outscopes the negation, as in Figure 16. The interaction with the lexeme-to-word Zero Inflection Construction in (26) and with the PAE and Inversion constructions is identical to that of (96).

Figure 16 Analysis of must not go.
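The wide/narrow scope contrast behind Figures 15 and 16 can be sketched with a toy compositional semantics. This is a hedged illustration under simplifying assumptions, not the paper's actual combinators: propositions are rendered as strings so that relative scope can be read off directly, and the SAI argument-order permutation that the combinator also licenses is omitted.

```python
# Toy model of the scope combinators: illustrative only.
# VP meanings map subject meanings to propositions (here, strings).

def NOT(vp):
    """Adverb meaning: a function from VP meanings to VP meanings."""
    return lambda subj: f"not({vp(subj)})"

def modal(name):
    """An auxiliary meaning of type <VP, VP>."""
    return lambda vp: lambda subj: f"{name}({vp(subj)})"

GO = lambda subj: f"go({subj})"

def wide(aux):
    """Wide-scope negation: NOT outscopes the modal (can, will)."""
    return lambda neg: lambda vp: neg(lambda s: aux(vp)(s))

def narrow(aux):
    """Narrow-scope negation: the modal outscopes NOT (may, must)."""
    return lambda neg: lambda vp: lambda s: aux(neg(vp))(s)

will = wide(modal("will"))
must = narrow(modal("must"))

print(will(NOT)(GO)("kim"))   # not(will(go(kim)))  'will not go'
print(must(NOT)(GO)("kim"))   # must(not(go(kim)))  'must not go'
```

Either combinator hands back an auxiliary-like meaning, so the syntax can stay uniform while the scope is fixed lexically, which is the point of the two-way lexical classification in (95a, b).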

Note that nothing in the present account prevents the iteration of sentential negation. Thus, rule (97a) can apply to its own output to yield sentences like *You need not not not exercise, and similarly for (97b). However, the oddness of negation iteration is likely the product of two factors, namely prosodic phrasing and semantic complexity. Sentences that contain both sentential negation and VP constituent negation typically require the clausal negation to contract with the auxiliary verb, or one of the negations to be realized with stress. This is illustrated in (100), where ‘ $|$ ’ indicates a prosodic break.

Following Zwicky (Reference Zwicky and Zwicky1986), we propose that the unstressed clausal negation must prosodically ‘lean’ (by contracting) on its respective verbal head, analogously to unstressed pronouns. Alone, the negation cannot project its own phonological phrase. Similarly, the unstressed VP constituent negation must become part of the phonological phrase projected by the VP complement. The second factor that conspires against negation iteration is that sentences with multiple negation are not trivial to process. For example, sentences like Didn’t Robin not stay?, or He would never not tell me the truth are grammatical, but difficult to understand. It is known that the presence of negation independently leads to higher error rates (Wason Reference Wason1961), longer response times (Slobin Reference Slobin1966), greater cortical activation (Carpenter et al. Reference Carpenter, Just, Keller, Eddy and Thulborn1999) and increased brain responses (Staab Reference Staab2007) in comparison with affirmative counterpart sentences. Thus, *You need not not not exercise is odd arguably because there is no suitable phonological phrasing and because multiple negations are excessively difficult to process.

A second instance where negation interacts with prosody has to do with inversion. Whereas examples like (101a) are not as good as the contracted counterpart in (101b), the presence of a longer subject improves the acceptability of such constructions, as (101c, d) show. Such constructions are formal and somewhat archaic, and rather like the Heavy NP Shift construction in that the subject NP has to be fairly hefty. Again, this appears to be a matter of phonological phrasing, since the unstressed negation can either contract or prosodify with the phonological phrase projected by the non-pronominal NP.

In summary, our analysis of sentential negation treats not as a complement of the finite auxiliary verb. Therefore, not is ordered after the finite verb. In sentential negation, not does not form a constituent with the following VP and hence never ‘fronts’ with the material following it. Not participates in lexical idiosyncrasy (scope variation) only with finite auxiliaries. Exceptional cases can be easily accommodated by the present theory. For example, negative verb forms like need not in (102) are not obtained via any lexical rule, given that they lack a non-negative counterpart (e.g. *You need bother with that). These forms are simply listed as lexemes, by assuming a grammaticized form of the verb.Footnote [44]

No extant movement-based account covers the range of phenomena discussed so far. As Lasnik et al. (Reference Lasnik, Depiante and Stepanov2000: 181–190) stress, the Minimalist analysis articulated in Chomsky (Reference Chomsky, Hale and Keyser1993) fails to deal with the ungrammaticality of even simple examples like *John left not or *John not left, and the account in Lasnik et al. (Reference Lasnik, Depiante and Stepanov2000) does not analyze modal/negation scope variation or inversion idiosyncrasies.

7 Contraction

Let us now turn to the not-contracted forms, all of which are finite. As illustrated below, these exhibit further lexical idiosyncrasy; see Zwicky & Pullum (Reference Zwicky and Pullum1983) and Quirk et al. (Reference Quirk, Greenbaum, Leech and Svartvik1985: 3.23; 3.39).

Moreover, there are irregularities in the varieties that allow the forms shown in (104). We use the symbol % to indicate a form that is well formed only in certain varieties and $\dagger$ to indicate an erstwhile dialectal form that now seems to be obsolescent.

Other idiosyncratic contracted forms include the following:

There are also gaps in the contraction paradigm, at least in standard varieties of English (see Hudson (Reference Hudson2000b)):

In addition, as first noted by Horn (Reference Horn1972) (see also Zwicky & Pullum (Reference Zwicky and Pullum1983)), contracted forms exhibit scope idiosyncrasy of the sort we have already been considering. For example, not must outscope the modal in the interpretation of won’t (in either its volitional or futurate uses):

By contrast, the contracted forms of deontic modals like should exhibit the opposite scope interpretation:

In sum, the phonological and semantic idiosyncrasies documented by Horn, Zwicky, Pullum and others clearly point to a lexical analysis of not-contraction, i.e. one that rejects contraction as a phonological rule of the sort proffered in some generative textbooksFootnote [45] and accepted uncritically in much of the generative literature.

Furthermore, as already noted in Section 2, there are inflected forms that only occur in inversion constructions, e.g. the first-person singular negative contracted form of the copula illustrated in (109):Footnote [46]

This is analyzed in terms of a distinct listeme that licenses the Inverted Copular Contraction Construction shown in (110):

And this licenses words like (111), which can head aux-initial clauses like the ones shown in (112). The feature pred indicates phrases that can appear in predicative position, such as certain VPs, PPs, APs, and NPs.Footnote [47]

Note that first-person aren’t is correctly blocked in [inv $-$ ] environments like (113).

Finally, Bresnan (Reference Bresnan, Dekkers, van der Leeuw and van de Weijer2000) observes that in many varieties of American English, inversion and overt sentential negation are incompatible:

The only corresponding sentences in these varieties (which include most spoken American vernacularsFootnote [48] ) are those shown in (115):

Since, as we have seen, finite negation is plausibly treated via complement selection, these data too can be described in lexical terms. In a variety/register where examples like (114) are ill-formed, the negation constructions introduced in the preceding section could be revised to require that the mother lexeme be [inv $-$ ]. That said, there are example pairs which are more readily interpretable with sentential negation:Footnote [49]

Leaving variational concerns aside, our analysis of contraction follows a tradition in which inversion constructions are distinguished by the feature specification [inv  $+$ ]. The various quirks are then analyzed in terms of positive or negative lexical specifications for this feature, as we have seen. These exceptions work in tandem with the following postlexical (i.e. post-inflectional) construction. In short, the adverb is suppressed from val in the mother node, and the negation semantics is added.

Postlexical constructions require both mother and daughter to be of type word. Note that the daughter in (118) is in addition required to be [aux +] (ensuring that only auxiliary verbs can undergo not-contraction). By contrast, the mother is free to be either [aux +] or [aux $-$ ] (bool(ean) is the immediate supertype of $+$ and $-$ ), and hence is free to appear either in the NICER environments, or else in non-auxiliary contexts like phrases licensed by the Subject–Predicate Construction. In this way, the examples in both (119) and (120) are correctly licensed:
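The role of the bool supertype here can be made concrete with a minimal unification sketch. The encoding below is an assumption for illustration, not the SBCG type system: bool unifies with either polarity, while the fixed values + and - clash with each other.

```python
# Minimal sketch of AUX-value unification with the supertype 'bool'.

def unify(a, b):
    """Return the most specific value compatible with both, or None on clash."""
    if a == b:
        return a
    if "bool" in (a, b):          # bool subsumes both '+' and '-'
        return b if a == "bool" else a
    return None                   # '+' against '-' is a clash

mother = "bool"                   # contracted word: underspecified [AUX bool]
print(unify(mother, "+"))         # '+'  usable in NICER environments
print(unify(mother, "-"))         # '-'  usable in Subject-Predicate clauses
print(unify("+", "-"))            # None: a word fixed as [AUX +] is blocked
```

This is why the contracted word, unlike lexically fixed unstressed auxiliary do, can surface in both auxiliary and non-auxiliary contexts.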

And a last point: the semantics of a contracted verb is just the same as the result of semantically composing the uncontracted verb with not. In particular, the analysis of the scope interaction of negation and modality is carried over in its entirety. The semantics of mustn’t is the same as that of must not, that of won’t mirrors that of will not, etc. There may be differences in the communicative potential of such pairs of course, but these are presumably more general matters of lexical variation, not part of the grammar of contraction.

PAE interacts with other aspects of our analysis so as to produce the feeding and blocking relations that are needed to generate the complex facts of EAS. First, note that the negation constructions feed PAE, by which we mean, for example, that the Wide Negation Construction licenses unary constructs whose mother is a verbal lexeme suitable to be the daughter of a construct licensed by PAE. And the mother of this last construct in turn is a verbal lexeme that can be the daughter of an inflectional construct, let us say one whose mother will be a third-singular word, as sketched in (121):

And since this inflectional construction feeds contraction, we produce a construct that licenses a word like the following:

We thus assign the same semantics to Dominique cannot and Dominique can’t, as desired.

The grammar also licenses a word just like the one in (122), but with positive specifications for aux and inv (since can is an auxiliary verb, nothing forces the negative specifications). This allows contracted forms to appear in aux-initial constructs, whether their complements have undergone PAE or not, as in (123):

Finally, note that PAE preserves the scope restrictions that are controlled by the negation constructions. That is, examples like (124a, b) are scopally restricted as indicated:

This follows because the elliptical sentences are constructed according to the same scope constraints as nonelliptical sentences.

8 Rebuttal

The last auxiliary-sensitive phenomenon to be discussed here is what we’ve called ‘rebuttal’, since it allows speakers to counter some point that has been made by the addressee in the salient discourse context. We will assume that the following expressions are functionally equivalent in American English:

In addition to conveying the proposition that some contextually appropriate group including the speaker agrees to attend some convention that is under discussion, someone who utters (125a–c) also conveys that in so doing (s)he is somehow rebutting a claim that has just been made. The most likely candidate for this claim is an interlocutor’s assertion or suggestion in the immediately prior discourse that the aforementioned group of individuals would not attend the convention. Perhaps so and too require a more direct connection between the rebuttal and the rebutted claim than mere focal accent does. Nonetheless, we will treat all three of the rebuttal mechanisms in (125) as variants of a single phenomenon, leaving it to subsequent research to provide a more subtle treatment of the pragmatic differences.Footnote [50]

As noted by Jackendoff (Reference Jackendoff1972), the potential for rebuttal (‘reaffirmation’, in his terms) is a special property of accented auxiliaries, one that distinguishes them from accented non-auxiliary verbs. The rebuttal particles are always accented and combine only with auxiliary verbs:

The analysis is based on a single postlexical construction, as sketched in (128), which yields a finite, uninvertible, unembeddableFootnote [51] verb underspecified for aux:

Put informally, the rebuttal form is achieved strictly by focal accent if the verb is contracted, and achieved via too, so or focal accent otherwise. Here ‘ $\bullet$ ’ is a composition operator functioning as ‘expressive glue’ (Potts Reference Potts2003). Making reference to elements of the context of utterance, it introduces the act of rebuttal in an independent dimension – a parallel channel of communication. As intended, [inv $-$ ] ensures that the rebutting verb projects a non-inverted clause:

In the simplest case, this analysis will license words like the following:

These words can be used to rebut the latest move in the cntxt value (on the ‘Dialogue-Game-Board’ in the sense of Ginzburg (Reference Ginzburg2012)) when that is consistent with asserting the content of the sentence the verb projects.

As a final, subtle prediction of our analysis of Rebuttal, consider the following data set discussed by Embick & Noyer (Reference Embick and Noyer2001):

Embick and Noyer sketch the beginnings of an analysis based on the idea that putative ineffability here is created by constituent negation, which suffices to block affix lowering, but is not enough to trigger the transformation of Do-Support.

However, these facts follow to the letter from independently motivated aspects of our analysis. For example, (131a) is ungrammatical because not illegally precedes the (indicative) finite form do. One might think that do is a non-finite form, but then the sentence would be ill-formed because it would lack a finite verbal head. The unaccented auxiliary do in (131b) must be [aux $-$ ] because of the Subject–Predicate Construction, but the lexeme for auxiliary do is lexically specified as [aux +], which is incompatible with such a requirement. Finally, the verb heading (131c) is the rebuttal form of auxiliary do, which, as we have just seen, can be [aux $-$ ]. Its VP complement is always not do that, whose head is the non-auxiliary verb do. All is as it should be in (131).

As in other SAI phenomena, the present account can handle subtle exceptional cases. For example, for some speakers, the negated permissive may form is exceptional in that it necessarily has rebuttal force:

This can be accounted for if the form may not is explicitly listed in the lexicon with rebuttal force, rather than being derived through negation and rebuttal rules.

9 Conclusion

The English auxiliary system exhibits many lexical exceptions and subregularities, and considerable dialectal variation. The idiosyncrasies range from families of similar, but semantically distinct inversion constructions, to auxiliary verbs with peculiar constructional distributions and distinct interactions with negation. Such idiosyncrasies, commonly omitted from generative analyses and discussions, as well as the general principles that govern both aux-related and non-aux constructions, can be accommodated within constructional grammar. The analysis of the English auxiliary system sketched in the present work involves a small inventory of language-particular constructions, appropriately constrained, which express the well-known generalizations about auxiliary constructions, as well as the often-ignored cases of lexical idiosyncrasy, including the distribution of the auxiliary do. In particular, under these constructional assumptions, the auxiliary verb do is readily analyzed without appeal to a Do-Support transformation, or to nonmonotonic principles of optimization.

In the present theory clauses are required to be headed by a finite verb, which may be an auxiliary verb or a non-auxiliary verb. Auxiliaries precede any lexical verbs because some auxiliaries have only finite lexical forms and hence must precede all other verbs they subcategorize for, and the strict ordering of auxiliary elements follows from semantic constraints and/or feature incompatibilities. Auxiliaries determine the form of the following verb, and thus such constraints are enforced by lexical selection, without anything like affix-hopping. Moreover, auxiliary-initial clauses do not require anything like head movement. Rather, they involve a different (post-verbal) realization of the subject. Unstressed auxiliary do is restricted to [aux +] environments, but when it interacts with NICER constructions it can appear in a wider range of environments, like other auxiliaries.

The present account, compatible in principle with any constraint-based grammatical framework, is cast in terms of a constructional theory of grammar, drawing from a variety of construction-based approaches to grammar. The most conspicuous influences are Head-Driven Phrase Structure Grammar and (Berkeley) Construction Grammar. In addition to its superior treatment of EAS, the present account contains no movement operations, and therefore fits well into a minimalist philosophy of grammar.

APPENDIX. English Auxiliary Grammar

The type hierarchy (partial)

The construct type hierarchy (partial)

Grammar Signature: Some Type Declarations

Some lexical-class constructions (verbs)

Some Auxiliary Listemes

Some Lexical Combinatoric Constructions

Some Phrasal Combinatoric Constructions

Footnotes

The primary author of this paper, Ivan Sag, worked on aspects of the auxiliary system of English throughout his career in linguistics. He left a version of this comprehensive overview unfinished when he died in September 2013. His wish was that it should be finished and published as a co-authored journal article. The task of completion proved remarkably complex, and ultimately brought together a large cooperative team of his colleagues and friends – an outcome that would have greatly pleased him. A surprisingly large number of detailed problems had to be resolved by people well acquainted both with classical HPSG and the sign-based construction grammar (SBCG) that Ivan (with others) was developing over the last decade of his life. The order of names on the by-line of this paper reflects the various contributions to the work only imperfectly. The overall framework and content of the paper are entirely due to Sag; the vast majority of the rewriting was done by Chaves, who was in charge of the typescript throughout, and the other authors contributed by email in various ways to resolving the many problems that came up during the revision and refereeing. Ivan Sag acknowledged the financial support of the Andrew W. Mellon Foundation; the Alfred P. Sloan Foundation; The System Development Foundation; Stanford’s Center for the Study of Language and Information; Das Bundesministerium für Bildung, Wissenschaft, Forschung, und Technologie (Project Verbmobil); grant no. IRI-9612682 from the National Science Foundation; and grant no. 2000-5633 from the William and Flora Hewlett Foundation to the Center for Advanced Study in the Behavioral Sciences. 
He expressed thanks to three anonymous JL reviewers of the first version he submitted to this journal, and to discussants including Farrell Ackerman, Emily Bender, Rajesh Bhatt, Bob Borsley, Joan Bresnan, Alex Clark, Ann Copestake, Bruno Estigarribia, Hana Filip, Chuck Fillmore, Dan Flickinger, Gerald Gazdar, Jonathan Ginzburg, Jane Grimshaw, Paul Hirschbühler, Dick Hudson, Paul Kay, Jongbok Kim, Paul Kiparsky, Tibor Kiss, Shalom Lappin, Bob Levine, Sally McConnell-Ginet, David Pesetsky, Carl Pollard, Eric Potsdam, Geoff Pullum, Peter Sells, Anthony Warner, Tom Wasow, and Arnold Zwicky. The co-authors of the present version wish to note the valuable assistance they had from Bob Borsley, Danièle Godard, and especially Bob Levine, plus three further JL reviewers of the final version.

1 This mnemonic was coined by Huddleston (Reference Huddleston1976). For a slightly different classification, and an overview of the grammatically relevant properties of auxiliaries, see Huddleston, Pullum et al. (Reference Huddleston and Pullum2002: 90–115). Following Miller & Pullum (Reference Miller, Pullum, Hofmeister and Norcliffe2013), we use the term Post-Auxiliary Ellipsis (PAE) rather than the more familiar ‘VP Ellipsis’ because, as Hankamer (Reference Hankamer1978: 66n) notes, it is neither necessary nor sufficient that it should involve ellipsis of a VP. We also take the position that infinitival to is an auxiliary verb, as assumed by GPSG and argued for by Levine (Reference Levine2012).

2 en is the perfect participle suffix, ing the present participle suffix, and s the 3rd-singular-indicative present verbal inflection.

3 Indeed, the widespread acceptance of Chomsky’s analysis of EAS, extolled by Lees (Reference Lees1957) as the first truly ‘scientific’ analysis of a significant syntactic problem, had immediate and profound consequences for the field. But as Gazdar et al. (Reference Gazdar, Pullum and Sag1982: 613–616) showed, Chomsky’s analysis suffered from a host of problems, including various ordering paradoxes and counterintuitive constituency claims.

4 See, for example, Kim (Reference Kim2000).

5 In order for this analysis to succeed, it would of course also have to explain why (i) does not preempt (5b):

6 See Walter & Jaeger (Reference Walter and Jaeger2005) and Jaeger & Wasow (Reference Jaeger and Wasow2005) for further discussion.

7 Note, however, that there is considerable dialectal variation regarding shall (Nunberg Reference Nunberg2001).

8 In Gazdar et al. (Reference Gazdar, Pullum and Sag1982) and other accounts, such effects were controlled by aux instead, but in the present account aux is not strictly lexically specified.

9 The feature i(ndependent)-c(lause) from Ginzburg & Sag (Reference Ginzburg and Sag2000) is used to ensure that the phrase licensed by this verb cannot function as a subordinate clause, except in those environments where ‘main clause phenomena’ are permitted.

13 Feature-based phrase structure grammars like sbcg can be associated with weights (Brew Reference Brew1995, Briscoe & Copestake Reference Briscoe and Copestake1999, Linadarki Reference Linadarki2006, Miyao & Tsujii Reference Miyao and Tsujii2008) and integrated into psycholinguistic models where the effects of frequency, priming, and inhibition can be taken into account (Konieczny Reference Konieczny1996).

14 Whereas the feature val characterizes the overt syntactic realization of the valents (i.e. whether they are local or extracted, elliptical or not), the feature arg(ument)-str(ucture) is responsible for establishing the size and type of valence frame of any given word, regardless of how the valents are overtly realized. However, for our purposes all constraints are stated over val, and arg(ument)-str(ucture) is not shown, since nothing hinges on this distinction here.

15 Falk (Reference Falk, Butt and King2003) argues that auxiliaries like dare not are not raising verbs either, but attestations like (i) and (ii) indicate otherwise. Regardless, the present account can be revised by introducing a new supertype covering subject-raising, control, and equative verbs, allowing some auxiliary verbs to be raising and others control.

  1. (i) It has been said of the precautionary principle that its underlying principle is: There dare not be even a risk of a risk. Almost every human activity – from (…)[17-10-16 USWashington Examiner ABC]

  2. (ii) Cleaning a meth house can be expensive $3,000 to $4,000 or more, and there dare not be a speck of the drug left behind. (http://www.methproject.org/action/details/news-story-2014-04-12.html)

16 For a more detailed discussion, see Sag (Reference Sag2012).

17 Variables such as X range over feature structures in the constructions and constraints we formulate.

18 We indicate via $\uparrow$ the names of immediately superordinate types, which provide constructional constraints of immediate relevance. This is purely for the reader’s convenience, as this information follows from the type hierarchy specified in the grammar signature. See the Appendix for a summary of the type hierarchy and the relevant constructions. In the case of (20), the annotation $\uparrow$ inflectional-cxt indicates that (20) is a subtype of inflectional construction, which imposes further constraints on mtr and dtrs. This in turn means that the types lexeme and word can in fact be omitted from (20), as they are one of the general requirements imposed by the inflectional-cxt superordinate.

19 We thus provide a natural way of expressing linguistically natural constraints requiring that two elements must be identical in all but a few specifiable respects. Note that this is a purely monotonic use of default constraints, akin to the category restriction operation introduced by Gazdar et al. (Reference Gazdar, Klein, Pullum and Sag1985). Constructions of this kind are equivalent to what some people refer to as ‘lexical rules’, notably in terms of their interactions. For discussion, see Müller (Reference Müller2006, Reference Müller and Müller2007), Sag (Reference Sag2012), and Müller & Wechsler (Reference Müller and Wechsler2014).

20 The informal representation in (22) is due to Chuck Fillmore. According to this scheme, a daughter is represented simply by enclosing its word sequence in square brackets; a construct is indicated by enclosing its sequence of daughters in curly braces.

21 For a more streamlined version of this and other constructions, see Sag (2012).

22 Here and throughout, boxed numbers or letters (‘tags’ in the terminology of Shieber (1986)) are used to indicate pieces of a feature structure that are equated by some grammatical constraint. However, the linguistic models assumed here are simply functions, rather than the reentrant graphs that are commonly used within hpsg. For an accessible introduction to the tools employed here, see Sag et al. (2003).
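What a tag expresses — two paths in a description resolving to one and the same value, not two equal copies — can be illustrated (as a sketch only, not the formalism itself) with ordinary object identity; the feature names here are invented for the example:

```python
# Illustrative sketch: a 'tag' equates two pieces of a feature structure,
# so both paths lead to the SAME object (token identity), not to two
# independently equal copies.

subj_agr = {"person": 3, "number": "sg"}   # the tagged value, e.g. [1]

verb = {
    "subject": {"agr": subj_agr},   # SUBJ|AGR  [1]
    "agr": subj_agr,                # AGR       [1]  (structure sharing)
}

# A constraint imposed through one path is visible through the other:
verb["agr"]["number"] = "pl"
assert verb["subject"]["agr"]["number"] == "pl"

# Token identity, not mere equality:
assert verb["subject"]["agr"] is verb["agr"]
```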

23 See Sag (2012) for more details. At stake is a complex set of issues that have motivated the ID–LP format (the separation of constructions and the principles that order their daughters) and ‘Linearization Theory’, the augmentation of sign-based grammar to allow interleaving of daughters as an account of word order freedom. On ID–LP grammars, see Gazdar & Pullum (1981), Gazdar et al. (1985), and Pollard & Sag (1987), among others. On Linearization Theory, see Reape (1994), Müller (1995, 1999, 2002, 2004), Donohue & Sag (1999), and Kathol (2000).

24 Some abbreviations: cl for clause and comp for complement.

25 Early work in Head-Driven Phrase Structure Grammar (HPSG), such as Flickinger et al. (1985), Flickinger (1987), and Pollard & Sag (1987), adapted multiple inheritance hierarchies, already used in computational work in knowledge representation and object-oriented programming, to express cross-classifying generalizations about words. The same general approach has subsequently been applied in various ways to the grammar of phrases by other linguists. Notable examples of such work are Hudson’s (1990, 2000a) Word Grammar, the construction-based variety of hpsg developed in Sag (1997) and Ginzburg & Sag (2000), and the variety of cxg emanating from Berkeley, beginning in the mid 1980s (see Fillmore et al. (1988), Fillmore (1999), Kay & Fillmore (1999), and Goldberg (1995)); see also Zwicky (1994), Kathol’s (1995, 2000) analysis of German clause types, and the proposals made in Culicover & Jackendoff (2005). In all of these traditions, generalizations about constructions are expressed through the interaction of a hierarchy of types and the type-based inheritance of grammatical constraints.
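The borrowed programming idea can be made concrete with ordinary class inheritance; the types and constraints below are invented for illustration and are not part of any of the cited grammars:

```python
# Illustrative sketch: a small multiple-inheritance hierarchy in which one
# word type inherits constraints from two independent dimensions at once,
# the way cross-classifying lexical hierarchies combine generalizations.

class Verb:                      # one dimension: part of speech
    category = "verb"

class Auxiliary:                 # another dimension: auxiliary-hood
    inverts = True               # e.g. occurs in subject-auxiliary inversion
    takes_nt = True              # e.g. hosts the clitic n't

class AuxVerb(Verb, Auxiliary):  # cross-classification by inheritance
    pass

can = AuxVerb()
assert can.category == "verb"            # inherited from Verb
assert can.inverts and can.takes_nt      # inherited from Auxiliary
```

Each constraint is stated once, on the supertype where it belongs, and every subtype inherits it — the same division of labor the hierarchy-of-types approach exploits.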

26 An analogous rule handles head–complement constructions in which there is no subject, such as non-predicative prepositions, nouns, and adjectives and their complements. Following Bender & Flickinger (1999) and Sag (2012), one can distinguish which valent, if any, is the subject via the feature XARG. We have omitted this feature from this work for ease of exposition.

27 The modal can is an auxiliary verb of type aux1-v-lxm, which, as we shall see, causes the modal to have narrow scope when negated. Other auxiliaries, such as epistemic may, are of type aux2-v-lxm, which forces the modal to have wide scope when negated.

28 Auxiliary verb forms of standard American English are few in number, and are all finite verbal forms, except for (base form) be, being, been, (base form) have, having, and infinitival to.

29 For discussion, see Lasnik et al. (2000).

30 We will use the symbol ‘£’ to mark a sentence that is unacceptable in the U.S., but generally acceptable in the U.K. and the British Commonwealth.

31 For further discussion of these varieties, see Section 5.

32 We assume that the construction responsible for imperative verb uses takes a base form verb and yields zero-inflected finite verbs which do not impose gram- on their complements, thus allowing Don’t have breakfast after 9, Don’t be rude, Do have fun, Do be kind, and so on.

33 In fact, even simple interrogatives like What was her name? can instantiate various kinds of interrogative inquiry, including rhetorical questions (in which case the Question Under Discussion is whether the hearer knows the answer or not), self-addressed questions (which have a peculiar intonation and special discourse requirements), and standard interrogatives (information requests). Hence, there are three different clause types that share the same constructional form.

34 Note that we are here following Fillmore (1999), who argues that there is no general semantics shared by all aux-initial constructions. This is a controversial point; see Goldberg (2006, 2009), Borsley & Newmeyer (2009), and the references cited there. We also follow Ross (1967) in assuming that there is no grammatical ban on clausal subjects in SAI, and that such examples are sometimes low in acceptability for performance reasons. At stake are examples like (i) and (ii), which at least some speakers deem acceptable, especially if the subject phrase is separated from the rest of the utterance by prosodic breaks.

  1. (i) Would [whether or not we arrive on time] really make a difference?

  2. (ii) Was [that he lied to you] really disappointing?

35 The positive specification for the feature independent-clause (ic) in Figure 11 ensures that the phrase licensed by this construct cannot function as a subordinate clause, except in those environments where ‘main clause phenomena’ are permitted. See Section 8.

36 The account of Lasnik et al. (2000) incorrectly rules out non-base-form VP Ellipsis like (50d, e).

37 For a survey of the issues surrounding this tradition of analysis, consult Lasnik et al. (2000).

38 We follow Warner (2000). These contrasts were first noted by Akmajian & Wasow (1975).

39 The precise definition of the various subcategories relevant to the analysis of negation is a subtle matter that has been the subject of considerable debate. We follow Kim & Sag’s (2002) analysis in the main, though we incorporate and adapt further insights from Warner (2000). For a useful discussion of related issues in both analytic and historical terms, see Horn (1989: Ch. 3).

40 Related kinds of negation are those that modify other kinds of phrases, such as ‘not many people’, ‘a not very difficult problem’, etc.

41 Notice, however, that the VP complements of auxiliary verbs do not allow clefting and hence cannot provide further support for the relevant VP structures:

  1. (i) *It’s [(not) go to the party] that they should.

  2. (ii) *What they should is [(not) go to the party].

42 For ease of presentation, we leave certain information redundantly expressed in both negation constructions. In fact, these can be viewed as two constructions with a common superordinate. See the Appendix for details.

43 In this presentation, we have carved out a minimal set of combinators that allows 2-argument permutation and nothing more. This is misleading: there is clearly a more general theory of combinators for natural language to be developed. The beginnings of such a theory are sketched by Klein & Sag (1985) and Gazdar et al. (1985).

44 See Levine (2013) for evidence that uses of the modal need that take an overt complement VP are NPIs. Consequently, I don’t think you need bother with that is licit, as a case of garden-variety NPI licensing.

45 E.g. Haegeman (1991), Radford (2004).

47 Recall that $F_{ID}$ is the identity function over VP meanings.

48 It may be that inversions like (114) are restricted to formal registers, even in British dialects.

49 This leaves open the possibility that the contrasts in Bresnan (2000) may, at least in part, not be due to grammar proper.

50 So and too occur almost exclusively in American varieties, it seems. Indeed is the parallel form in British English, though it occurs in American varieties as well.

51 Recall from Section 2 that the specification [ic $+$] restricts verbs to root clauses.

References

Akmajian, Adrian. 1984. Sentence types and the form-function fit. Natural Language & Linguistic Theory 2, 1–24.
Akmajian, Adrian & Wasow, Thomas. 1975. The constituent structure of VP and AUX and the position of the verb be. Linguistic Analysis 1, 205–245.
Ambridge, Ben, Rowland, Caroline & Pine, Julian. 2008. Is structure dependence an innate constraint? New experimental evidence from children’s complex-question production. Cognitive Science 32, 222–255.
Arnold, Doug & Borsley, Robert D. 2010. Auxiliary-stranding relative clauses. In Müller, Stefan (ed.), Proceedings of the HPSG-2010 Conference, 47–67. Stanford: CSLI.
Baker, Carl Lee. 1989. English syntax. Cambridge, MA: MIT Press.
Bender, Emily & Flickinger, Daniel P. 1999. Peripheral constructions and core phenomena: Agreement in tag questions. In Webelhuth, Koenig & Kathol (eds.), 199–214.
Berwick, Robert C. & Chomsky, Noam. 2008. ‘Poverty of the Stimulus’ revisited: Recent challenges reconsidered. 30th Annual Meeting of the Cognitive Science Society.
Berwick, Robert C., Pietroski, Paul, Yankama, Beracah & Chomsky, Noam. 2011. Poverty of the stimulus revisited. Cognitive Science 35, 1207–1242.
Binnick, Robert I. 1991. Time and the verb: A guide to tense and aspect. New York & Oxford: Oxford University Press.
Boas, Hans C. & Sag, Ivan A. (eds.). 2012. Sign-based construction grammar. Stanford: CSLI Publications.
Bod, Rens. 2009. From exemplar to grammar: A probabilistic analogy-based model of language learning. Cognitive Science 33.5, 752–793.
Borsley, Robert (ed.). 2000. The nature and function of syntactic categories. San Diego: Academic Press.
Borsley, Robert D. & Newmeyer, Frederick J. 2009. On subject–auxiliary inversion and the notion ‘purely formal generalization’. Cognitive Linguistics 20.1, 135–143.
Bresnan, Joan. 2000. Optimal syntax. In Dekkers, Joost, van der Leeuw, Frank & van de Weijer, Jeroen (eds.), Optimality theory: Phonology, syntax and acquisition. Oxford: Oxford University Press.
Bresnan, Joan W. 2001. Lexical-functional syntax. Oxford: Basil Blackwell.
Brew, Chris. 1995. Stochastic HPSG. Proceedings of the 7th Conference of the EACL. Dublin.
Briscoe, Ted & Copestake, Ann. 1999. Lexical rules in constraint-based grammar. Computational Linguistics 25.4, 487–526.
Carpenter, Patricia A., Just, Marcel Adam, Keller, Timothy A., Eddy, William F. & Thulborn, Keith R. 1999. Time course of fMRI activation in language and spatial networks during sentence comprehension. NeuroImage 10, 216–224.
Chomsky, Noam. 1955. The logical structure of linguistic theory. Ms., Society of Fellows, Harvard University. Published in 1975 as The logical structure of linguistic theory by Plenum. Now available from the University of Chicago Press, Chicago, Illinois.
Chomsky, Noam. 1956. Three models for the description of language. IRE Transactions on Information Theory 2, 113–124.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1993. A minimalist program for linguistic theory. In Hale, Ken & Keyser, Samuel J. (eds.), The view from building 20, 1–52. Cambridge, MA: MIT Press.
Chomsky, Noam. 2010. Restricting stipulations: Consequences and challenges. Talk given at the Universität Stuttgart.
Clark, Alexander & Eyraud, Rémi. 2006. Learning auxiliary fronting with grammatical inference. Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-X ’06), 125–132. Stroudsburg, PA: Association for Computational Linguistics.
Clark, Alexander & Lappin, Shalom. 2011. Linguistic nativism and the poverty of the stimulus. Oxford: Wiley-Blackwell.
Crain, Stephen & Nakayama, Mineharu. 1987. Structure dependence in grammar formation. Language 63.3, 522–543.
Culicover, Peter. 1971. Syntactic and semantic investigations. Ph.D. dissertation, MIT.
Culicover, Peter & Jackendoff, Ray. 2005. Simpler syntax. Oxford: Oxford University Press.
Di Sciullo, Anna Marie & Williams, Edwin. 1987. On the definition of word. Cambridge, MA: MIT Press.
Donohue, Cathryn & Sag, Ivan A. 1999. Domains in Warlpiri. Sixth International Conference on HPSG – Abstracts, 04–06 August 1999, 101–106. Edinburgh.
Embick, David & Noyer, Rolf. 2001. Movement operations after syntax. Linguistic Inquiry 32.4, 555–595.
Estigarribia, Bruno. 2007. Asking questions: Language variation and language acquisition. Ph.D. dissertation, Stanford University.
Estigarribia, Bruno. 2010. Facilitation by variation: Right-to-left learning of English yes/no questions. Cognitive Science 34.1, 68–93.
Eynde, Frank Van. 2015. Predicative constructions: From the Fregean to a Montagovian treatment. Stanford, CA: CSLI Publications.
Falk, Yehuda N. 1984. The English auxiliary system: A lexical-functional analysis. Language 60, 483–509.
Falk, Yehuda N. 2003. The English auxiliary system revisited. In Butt, Miriam & King, Tracy Holloway (eds.), Proceedings of the LFG03 Conference, 184–204.
Fillmore, Charles J. 1999. Inversion and constructional inheritance. In Webelhuth, Koenig & Kathol (eds.), (Studies in Constraint-Based Lexicalism, chap. 21), 113–128.
Fillmore, Charles J., Kay, Paul & O’Connor, Mary C. 1988. Regularity and idiomaticity in grammatical constructions: The case of let alone. Language 64, 501–538.
Flickinger, Daniel P. 1987. Lexical rules in the hierarchical lexicon. Ph.D. dissertation, Stanford University.
Flickinger, Daniel P., Pollard, Carl J. & Wasow, Thomas. 1985. A computational semantics for natural language. Proceedings of the Twenty-Third Annual Meeting of the ACL, 262–267. Chicago, IL: ACL.
Fodor, Janet Dean, Bever, Thomas G. & Garrett, Merrill F. 1974. The psychology of language. New York: McGraw Hill.
Freidin, Robert. 2004. Syntactic structures redux. Syntax 7.2, 101–127.
Gazdar, Gerald, Klein, Ewan, Pullum, Geoffrey K. & Sag, Ivan A. 1985. Generalized phrase structure grammar. Oxford: Basil Blackwell and Cambridge, MA: Harvard University Press.
Gazdar, Gerald & Pullum, Geoffrey K. 1981. Subcategorization, constituent order and the notion of ‘head’. In Moortgat, Michael, van der Hulst, Harry & Hoekstra, Teun (eds.), The scope of lexical rules, 107–123. Dordrecht: Foris.
Gazdar, Gerald, Pullum, Geoffrey K. & Sag, Ivan A. 1982. Auxiliaries and related phenomena in a restricted theory of grammar. Language 58, 591–638.
Ginzburg, Jonathan. 2012. The interactive stance: Meaning for conversation. Oxford: Oxford University Press.
Ginzburg, Jonathan & Sag, Ivan A. 2000. Interrogative investigations: The form, meaning, and use of English interrogatives. Stanford, CA: CSLI Publications.
Goldberg, Adele. 1995. Constructions: A construction grammar approach to argument structure. Chicago: University of Chicago Press.
Goldberg, Adele. 2006. Constructions at work: The nature of generalization in language. Oxford: Oxford University Press.
Goldberg, Adele E. 2009. Constructions work. Cognitive Linguistics 20.1, 201–224.
Grice, Paul H. 1975. Logic and conversation. In Cole, P. & Morgan, J. (eds.), Syntax and semantics, vol. 3. Academic Press.
Grimshaw, Jane. 1997. Projections, heads, and optimality. Linguistic Inquiry 28, 373–422.
Haegeman, Liliane. 1991. Introduction to government and binding theory. Oxford: Blackwell.
Hankamer, Jorge. 1978. On the non-transformational derivations of some null NP anaphors. Linguistic Inquiry 9, 55–74.
Hankamer, Jorge & Sag, Ivan A. 1976. Deep and surface anaphora. Linguistic Inquiry 7, 391–426.
Horn, Laurence. 1972. On the semantic properties of logical operators in English. Ph.D. dissertation, UCLA, Los Angeles.
Horn, Laurence R. 1989. A natural history of negation. Chicago: University of Chicago Press.
Huddleston, Rodney. 1976. Some theoretical issues in the description of the English verb. Lingua 40, 331–383.
Huddleston, Rodney D. & Pullum, Geoffrey K. 2002. The Cambridge grammar of the English language. Cambridge: Cambridge University Press.
Hudson, Richard. 1976. Arguments for a non-transformational grammar. Chicago: University of Chicago Press.
Hudson, Richard. 1977. The power of morphological rules. Lingua 42, 73–89.
Hudson, Richard. 1990. English word grammar. Oxford: Blackwell.
Hudson, Richard. 2000a. Grammar without functional categories. In Borsley (ed.), 7–35.
Hudson, Richard. 2000b. *I amn’t. Language 76, 297–323.
Jackendoff, Ray. 1972. Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.
Jacobson, Pauline. 2008. Direct compositionality and variable-free semantics: The case of antecedent-contained ellipsis. In Johnson, Kyle (ed.), Topics in ellipsis, 30–68. Cambridge University Press.
Jaeger, Florian T. & Wasow, Thomas. 2005. Production-complexity driven variation: Relativizer omission in non-subject-extracted relative clauses. The 18th CUNY Sentence Processing Conference, Tucson, AZ.
Johnson, David & Lappin, Shalom. 1999. Local constraints vs. economy (Stanford Monographs in Linguistics). Stanford: CSLI Publications.
Kaplan, Ronald M. & Bresnan, Joan. 1982. Lexical-functional grammar: A formal system for grammatical representation. In Bresnan, Joan (ed.), The mental representation of grammatical relations, 173–281. MIT Press. Reprinted in Mary Dalrymple, Ronald Kaplan, John Maxwell & Annie Zaenen (eds.), Formal issues in Lexical-Functional Grammar, 29–130. Stanford: CSLI Publications.
Kathol, Andreas. 1995. Linearization-based German syntax. Ph.D. dissertation, Ohio State University.
Kathol, Andreas. 2000. Linear syntax. New York & Oxford: Oxford University Press.
Kay, Paul & Fillmore, Charles. 1999. Grammatical constructions and linguistic generalizations: The What’s X doing Y? construction. Language 75.1, 1–33.
Kertz, Laura. 2010. Ellipsis reconsidered. Ph.D. dissertation, University of California, San Diego.
Kim, Christina S., Kobele, Gregory M., Runner, Jeffrey T. & Hale, John T. 2011. The acceptability cline in VP ellipsis. Syntax 14.4, 318–354.
Kim, Jong-Bok. 2000. The grammar of negation: A constraint-based approach. Stanford, CA: CSLI Publications.
Kim, Jong-Bok & Sag, Ivan A. 2002. French and English negation without head-movement. Natural Language & Linguistic Theory 20.2, 339–412.
Klein, Ewan & Sag, Ivan A. 1985. Type-driven translation. Linguistics & Philosophy 8, 163–201.
Klemola, Juhani. 1998. Semantics of do in southwestern dialects of English. In Tieken-Boon van Ostade, Ingrid, van der Wal, Marijke & van Leuvensteijn, Arjan (eds.), DO in English, Dutch and German: History and present-day variation (Tübinger Beiträge zur Linguistik 491), 25–51. Münster: Nodus Publikationen.
Klima, Edward S. 1964. Negation in English. In Fodor, Jerry A. & Katz, Jerrold J. (eds.), The structure of language: Readings in the philosophy of language, 246–323. Prentice Hall.
Konieczny, Lars. 1996. Human sentence processing: A semantics-oriented parsing approach. Ph.D. dissertation, University of Freiburg.
Lambrecht, Knud. 1990. What, me, worry?: Mad Magazine sentences revisited. Proceedings of the 16th Annual Meeting of the Berkeley Linguistics Society, 215–228.
Lasnik, Howard. 1995. Verbal morphology: Syntactic structures meets the Minimalist Program. In Campos, Hector & Kempchinsky, Paula (eds.), Evolution and revolution in linguistic theory, 251–275. Georgetown: Georgetown University Press.
Lasnik, Howard, Depiante, Marcela & Stepanov, Arthur. 2000. Syntactic structures revisited: Contemporary lectures on classic transformational theory. Cambridge, MA: MIT Press.
Lees, Robert B. 1957. Review of Syntactic structures. Language 33.3, 375–408.
Levine, Robert D. 2012. Auxiliaries: To’s company. Journal of Linguistics 48, 187–203.
Levine, Robert D. 2013. The modal need VP gap (non)anomaly. In Csipak, Eva, Eckhardt, Regine, Liu, Mingya & Sailer, Manfred (eds.), Beyond ever and any: New perspectives on negative polarity sensitivity. Berlin: Mouton de Gruyter.
Lewis, John D. & Elman, Jeffrey L. 2001. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. Proceedings of the 26th Annual Boston University Conference on Language Development, 359–370. Cascadilla Press.
Linadarki, Evita. 2006. Linguistic and statistical extensions of data oriented parsing. Ph.D. dissertation, University of Essex.
Marcus, Gary, Vouloumanos, Athena & Sag, Ivan A. 2003. Does Broca’s play by the rules? Nature Neuroscience 7, 652–653.
McCawley, James D. 1968. Concerning the base component of a transformational grammar. Foundations of Language 4.1, 55–81. Reprinted in Meaning and grammar, 35–58. New York, NY: Academic Press, 1976.
Michaelis, Laura. 2011. Stative by construction. Linguistics 49, 1359–1400.
Michaelis, Laura & Lambrecht, Knud. 1996. Toward a construction-based model of language function: The case of nominal extraposition. Language 72, 215–247.
Michaelis, Laura & Ruppenhofer, Josef. 2001. Beyond alternations: A constructional account of the applicative pattern in German. Stanford: CSLI Publications.
Mikkelsen, Line Hove. 2002. Specification is not inverted predication. Proceedings of the North East Linguistic Society (NELS 32), 403–422.
Miller, Philip. 2013. Usage preferences: The case of the English verbal anaphor do so. In Müller, Stefan (ed.), Proceedings of the 20th International Conference on Head-Driven Phrase Structure Grammar, 121–139. Freie Universität Berlin.
Miller, Philip & Pullum, Geoffrey K. 2013. Exophoric VP ellipsis. In Hofmeister, Philip & Norcliffe, Elisabeth (eds.), The core and the periphery: Data-driven perspectives on syntax inspired by Ivan A. Sag, 167–220. CSLI Publications.
Miller, Philip & Pullum, Geoffrey K. 2014. Exophoric VP ellipsis. In Hofmeister, Philip & Norcliffe, Elisabeth (eds.), The core and the periphery: Data-driven perspectives on syntax inspired by Ivan A. Sag, 5–32. Stanford, CA: CSLI Publications.
Miyao, Yusuke & Tsujii, Junichi. 2008. Feature forest models for probabilistic HPSG parsing. Computational Linguistics 34.1, 35–80.
Müller, Stefan. 1995. Scrambling in German – Extraction into the Mittelfeld. In T’sou, Benjamin K. & Yeung Lai, Tom Bong (eds.), Proceedings of the 10th Pacific Asia Conference on Language, Information and Computation, 79–83. City University of Hong Kong.
Müller, Stefan. 1999. Deutsche Syntax deklarativ: Head-Driven Phrase Structure Grammar für das Deutsche (Linguistische Arbeiten 394). Tübingen: Max Niemeyer Verlag.
Müller, Stefan. 2002. Blockaden und Deblockaden: Perfekt, Passiv und modale Infinitive. In Reitter, David (ed.), Proceedings of TaCoS 2002. Potsdam.
Müller, Stefan. 2004. An analysis of depictive secondary predicates in German without discontinuous constituents. In Müller, Stefan (ed.), Proceedings of the 11th International Conference on Head-Driven Phrase Structure Grammar, Center for Computational Linguistics, Katholieke Universiteit Leuven, 202–222. Stanford: CSLI Publications.
Müller, Stefan. 2006. Phrasal or lexical constructions? Language 82.4, 850–883.
Müller, Stefan. 2007. Phrasal or lexical constructions: Some comments on underspecification of constituent order, compositionality, and control. In Müller, Stefan (ed.), Proceedings of the 14th International Conference on Head-Driven Phrase Structure Grammar, 373–393. Stanford: CSLI Publications.
Müller, Stefan & Wechsler, Steven. 2014. Lexical approaches to argument structure. Theoretical Linguistics 40, 1–76.
Newmeyer, Frederick J. 1998. Language form and language function. Cambridge, MA: MIT Press.
Nunberg, Geoff. 2001. Shall we? (On the legal profession’s attachment to shall.) California Lawyer, March.
Palmer, Frank R. 1965. A linguistic study of the English verb. London: Longmans.
Piattelli-Palmarini, Massimo. 1980. Language and learning: The debate between Jean Piaget and Noam Chomsky. Cambridge: Harvard University Press.
Pollard, Carl J. & Sag, Ivan A. 1987. Information-based syntax and semantics, vol. 1 (CSLI Lecture Notes 13). Stanford: CSLI Publications [distributed by University of Chicago Press].
Pollard, Carl J. & Sag, Ivan A. 1994. Head-driven phrase structure grammar. Chicago: University of Chicago Press.
Potts, Christopher. 2003. The logic of conventional implicatures. Ph.D. dissertation, University of California, Santa Cruz.
Pullum, Geoffrey K. & Gazdar, Gerald. 1982. Natural languages and context-free languages. Linguistics & Philosophy 4, 471–504.
Pullum, Geoffrey K. & Scholz, Barbara C. 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review 19, 9–50.
Pullum, Geoffrey K. & Zwicky, Arnold M. 1997. Licensing of prosodic features by syntactic rules: The key to auxiliary reduction. Presented at the Annual Meeting of the Linguistic Society of America. [Abstract available at http://www-csli.stanford.edu/zwicky/LSA97.abst.pdf]
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey & Svartvik, Jan. 1985. A comprehensive grammar of the English language. London: Longman.
Radford, Andrew. 2004. Minimalist syntax: Exploring the structure of English (Cambridge Textbooks in Linguistics). Cambridge: Cambridge University Press.
Reali, Florencia & Christiansen, Morten H. 2005. Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science 29, 1007–1028.
Reape, Mike. 1994. Domain union and word order variation in German. In Nerbonne, John, Netter, Klaus & Pollard, Carl J. (eds.), German in Head-Driven Phrase Structure Grammar (CSLI Lecture Notes 46), 151–197. Stanford: CSLI Publications.
Ross, John R. 1967. Constraints on variables in syntax. Ph.D. dissertation, MIT. [Published in 1986 as Infinite syntax! Norwood, NJ: Ablex.]
Ross, John Robert. 1969. Auxiliaries as main verbs. In Todd, W. (ed.), Studies in philosophical linguistics, Series 1. Evanston, IL: Great Expectations Press.
Sag, Ivan A. 1997. English relative clause constructions. Journal of Linguistics 33.2, 431–484.
Sag, Ivan A. 2010. English filler-gap constructions. Language 86, 486–545.
Sag, Ivan A. 2012. Sign-based construction grammar: An informal synopsis. In Boas & Sag (eds.), 69–202.
Sag, Ivan A. & Nykiel, Joanna. 2011. Remarks on sluicing. In Müller, Stefan (ed.), Proceedings of the HPSG-2011 Conference, University of Washington, 188–208. Stanford: CSLI Publications.
Sag, Ivan A. & Wasow, Thomas. 2015. Flexible processing and the design of grammar. Journal of Psycholinguistic Research 44, 47–63.
Sag, Ivan A., Wasow, Thomas & Bender, Emily M. 2003. Syntactic theory: A formal introduction, 2nd edn. Stanford: CSLI Publications.
Scholz, Barbara C. & Pullum, Geoffrey K. 2006. Irrational nativist exuberance. In Stainton, Robert (ed.), Contemporary debates in cognitive science, 59–80. Oxford: Basil Blackwell.
Schütze, Carson T. 2004. Synchronic and diachronic microvariation in English do. Lingua 114, 495–516.
Shieber, Stuart M. 1986. Introduction to unification-based approaches to grammar (CSLI Lecture Notes 4). Stanford, CA: Center for the Study of Language and Information.
Slobin, Dan I. 1966. Grammatical transformations and sentence comprehension in childhood and adulthood. Journal of Verbal Learning and Verbal Behavior 5, 219–227.
Staab, Jenny. 2007. Negation in context: Electrophysiological and behavioral investigations of negation effects in discourse processing. Ph.D. dissertation, UCSD/SDSU.
Starosta, Stanley. 1985. The great AUX cataclysm. University of Hawaii Working Papers in Linguistics 17.2, 95–114.
Steedman, Mark. 1996. Surface structure and interpretation (Linguistic Inquiry Monograph 30). Cambridge, MA: MIT Press.
Steedman, Mark. 2000. The syntactic process. Cambridge, MA: MIT Press/Bradford Books.
Stump, Gregory T. 1985. The semantic variability of absolute constructions (Synthese Language Library). Dordrecht: Reidel.
Tanenhaus, Michael, Eberhard, K., Spivey-Knowlton, M. & Sedivy, J. 1995. Integration of visual and linguistic information during spoken language comprehension. Science 268, 1632–1634.
Walter, Mary Ann & Jaeger, T. Florian. 2005. Constraints on complementizer/relativizer drop: A strong lexical OCP effect of that. Proceedings of the 41st Annual Meeting of the Chicago Linguistic Society. Chicago, IL: CLS.
Warner, Anthony. 2000. English auxiliaries without lexical rules. In Borsley (ed.), (Syntax and Semantics 32), 167–220.
Warner, Anthony R. 1993. The grammar of English auxiliaries: An account in HPSG. Research Paper YLLS/RP 1993-4, Department of Language and Linguistic Science, University of York.
Wason, Peter Cathcart. 1961. Response to affirmative and negative binary statements. British Journal of Psychology 63.2, 133–142.
Webelhuth, Gert, Koenig, Jean-Pierre & Kathol, Andreas (eds.). 1999. Lexical and constructional aspects of linguistic explanation. Stanford: CSLI Publications.
Zwicky, Arnold M. 1986. The unaccented pronoun constraint in English. In Zwicky, Arnold M. (ed.), Interfaces (Ohio State University Working Papers in Linguistics 32), 100–114. Columbus, OH: The Ohio State University, Department of Linguistics.
Zwicky, Arnold M. 1994. Dealing out meaning. Proceedings of the Twentieth Annual Meeting of the Berkeley Linguistics Society, 611–625. Berkeley: BLS.
Zwicky, Arnold M. & Pullum, Geoffrey K. 1983. Cliticization versus inflection: English n’t. Language 59, 502–513.
