According to word and paradigm morphology (Matthews 1974, Blevins 2016), the word is the basic cognitive unit over which paradigmatic analogy operates to predict the form and meaning of novel forms. Baayen et al. (2018, 2019b) introduced a computational formalization of word and paradigm morphology which makes it possible to model the production and comprehension of complex words without requiring exponents, morphemes, inflectional classes, or separate treatment of regular and irregular morphology. This computational model, Linear Discriminative Learning (LDL), makes use of simple matrix algebra to move from words’ forms to their meanings (comprehension) and from words’ meanings to their forms (production). In Baayen et al. (2018), we showed that LDL makes accurate predictions for Latin verb conjugations. The present study reports results for noun declension in Estonian. Consistent with previous findings, the model’s predictions for comprehension and production are highly accurate. Importantly, the model achieves this high accuracy without being informed about stems, exponents, or inflectional classes. The speech errors produced by the model look like errors that native speakers might make. When the model is trained on incomplete paradigms, comprehension accuracy for unseen forms is hardly affected, but production accuracy decreases, reflecting the well-known asymmetry between comprehension and production.
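The matrix algebra behind LDL can be illustrated in a few lines: comprehension and production are linear mappings between a form matrix and a semantic matrix, estimated by least squares (i.e., via the Moore–Penrose pseudoinverse). A minimal NumPy sketch, using tiny illustrative matrices rather than the paper's actual Estonian data or feature representations:

```python
import numpy as np

# Toy form matrix C: rows are word forms, columns indicate which form cues
# (e.g., letter trigrams) occur in each word. Values here are illustrative.
C = np.array([[1., 0., 0., 1., 1.],
              [0., 1., 0., 1., 0.],
              [0., 0., 1., 0., 1.]])

# Toy semantic matrix S: one meaning vector per word form (illustrative).
S = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Comprehension: a mapping F with C @ F ≈ S, solved by least squares.
F = np.linalg.pinv(C) @ S
S_hat = C @ F   # predicted meanings from forms

# Production: a mapping G with S @ G ≈ C, likewise by least squares.
G = np.linalg.pinv(S) @ C
C_hat = S @ G   # predicted forms from meanings
```

Because the toy form matrix has full row rank, the comprehension mapping here reproduces the meanings exactly; with realistic data both mappings are approximate, and predicted form vectors must still be decoded into actual word forms.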
Generative syntax embodies three complementary goals, two of which are adopted by all practitioners: characterizing what a 'possible human language' might be, and providing formal grammars of individual languages. Generative syntacticians have not been very concerned with methodology; Chomsky set the tone for this lack of interest in Syntactic Structures. The section on generative methodology focuses on the relative merits of introspective versus conversational data, and evaluates the recent trend to admit more and more types of semantic data as evidence in syntactic theorizing. All formal generative approaches to syntax outside of P-and-P have their roots in the lexicalist hypothesis, first proposed by Chomsky. The typological goal has in general played a much more important role in Cognitive-Functional Linguistics than in generative grammar. Cognitive-functional linguists tend to prioritize conversational and experimental data over introspective data, though their day-to-day practice generally relies on the latter.
This paper examines two contrasting perspectives on morphological analysis, and considers inflectional patterns that bear on the choice between these alternatives. On what is termed an ABSTRACTIVE perspective, surface word forms are regarded as basic morphotactic units of a grammatical system, with roots, stems and exponents treated as abstractions over a lexicon of word forms. This traditional standpoint is contrasted with the more CONSTRUCTIVE perspective of post-Bloomfieldian models, in which surface word forms are ‘built’ from sub-word units. Part of the interest of this contrast is that it cuts across conventional divisions of morphological models. Thus, realization-based models are morphosyntactically ‘word-based’ in the sense that they regard words as the minimal meaningful units of a grammatical system. Yet morphotactically, these models tend to adopt a constructive ‘root-based’ or ‘stem-based’ perspective. An examination of some form-class patterns in Saami, Estonian and Georgian highlights advantages of an abstractive model, and suggests that these advantages derive from the fact that sets of words often predict other word forms and determine a morphotactic analysis of their parts, whereas sets of sub-word units are of limited predictive value and typically do not provide enough information to recover word forms.
This paper argues that the term ‘passive’ has been systematically misapplied to a class of impersonal constructions that suppress the realization of a syntactic subject. The reclassification of these constructions highlights a typological contrast between two types of verbal diathesis and clarifies the status of putative ‘passives of unaccusatives’ and ‘transitive passives’ in Balto-Finnic and Balto-Slavic. Impersonal verb forms differ from passives in two key respects: they are insensitive to the argument structure of a verb and can be formed from unergatives or unaccusatives, and they may retain direct objects. As with other subjectless forms of personal verbs, there is a strong tendency to interpret the suppressed subject of an impersonal as an indefinite human agent. Hence impersonalization is often felicitous only for verbs that select human subjects.
The experimental results reported in Clahsen's target article clearly distinguish regular from irregular processes and suggest a basic difference between items that are productively formed and items that are stored in the lexicon. However, these results do not directly implicate any particular combinatory operation (such as affixation), nor do they distinguish inflectional items from other productive formations.
This paper proposes that unbounded dependency constructions in English instantiate a surface subject-predicate structure in which the predicate is typically discontinuous. Evidence is presented supporting this discontinuous analysis over the operator-variable structure conventionally assigned to unbounded dependencies. A model of phrase structure that sanctions discontinuous representations is outlined, along with a feature-based strategy for generating the proposed structures within an extended phrase structure system. Extraction islands and other locality constraints are subsequently characterized with reference to the feature propagation paths that induce discontinuity.