Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- List of contributors
- Acknowledgements
- 1 Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech
- Part I Phonological structure in signed languages
- Part II Gesture and iconicity in sign and speech
- Part III Syntax in sign: Few or no effects of modality
- Part IV Using space and describing space: Pronouns, classifiers, and verb agreement
- Index
Part III - Syntax in sign: Few or no effects of modality
Published online by Cambridge University Press: 22 September 2009
Summary
Within the past 30 years, syntactic phenomena within signed languages have been studied fairly extensively. American Sign Language (ASL) in particular has been analyzed within the framework of relational grammar (Padden 1983), lexicalist frameworks (Cormier 1998; Cormier et al. 1999), discourse representation theory (Lillo-Martin and Klima 1990), and, perhaps most widely, generative and minimalist frameworks (Lillo-Martin 1986, 1991; Neidle et al. 2000). Many of these analyses show that ASL satisfies various syntactic principles and constraints that are generally taken to be universal for spoken languages (Lillo-Martin 1997). Such principles include Ross's (1967) Complex NP Constraint (Fischer 1974), Ross's Coordinate Structure Constraint (Padden 1983), the Wh-Island Constraint, Subjacency, and the Empty Category Principle (Lillo-Martin 1991; Romano 1991).
The level of syntax and phrase structure is where sequentiality is perhaps most obvious in signed languages, and this may be one reason why we can fairly straightforwardly apply many of these syntactic principles to signed languages. Indeed, the overall consensus seems to be that the visual–gestural modality of signed languages results in very few differences between the syntactic structure of signed languages and that of spoken languages.
The three chapters in this section support this general assumption, revealing minimal modality effects at the syntactic level. Those differences that do emerge seem to be based on the use of the signing space (as noted in Lillo-Martin's chapter, Chapter 10) or on nonmanual signals (as noted in the Pfau and the Tang and Sze chapters, Chapters 11 and 12).
Modality and Structure in Signed and Spoken Languages, pp. 237–240. Publisher: Cambridge University Press. Print publication year: 2002.