
2 - Light and Deep Parsing: A Cognitive Model of Sentence Processing

from Part II - Models of Neural and Cognitive Processing

Published online by Cambridge University Press: 30 November 2017

Philippe Blache, Laboratoire Parole et Langage, CNRS & Aix-Marseille Université, France
Thierry Poibeau, Centre National de la Recherche Scientifique (CNRS), Paris
Aline Villavicencio, Universidade Federal do Rio Grande do Sul, Brazil

Abstract

Humans process language quickly and efficiently, despite the complexity of the task. However, classical language-processing models do not account well for this efficiency. In particular, most of them assume an incremental organization in which processing is homogeneous and consists in building, step by step, a precise syntactic structure from which an interpretation is calculated. In this chapter, we present evidence that contradicts this view and show that language processing can operate at varying levels of precision. Often, processing remains shallow, leaving the interpretation greatly underspecified.

We propose a new language-processing architecture involving two types of mechanism. We show that, in most cases, shallow processing is sufficient and that deep parsing is required only when the system faces difficulty. The architecture we propose is based on an interdisciplinary perspective in which elements from linguistics, natural language processing, and psycholinguistics come into play.

Introduction

How humans process language quickly and efficiently remains largely unexplained. The main difficulty is that, although many disciplines (linguistics, psychology, computer science, and neuroscience) have addressed this question, it remains hard to describe language as a global system. Typically, no linguistic theory fully explains how the different sources of linguistic information interact. Most theories, and therefore most descriptions, capture only partial phenomena, without providing a general framework that brings together prosody, pragmatics, syntax, semantics, etc. For this reason, many linguistic theories still consider the organization of language to be modular: linguistic domains are studied and processed separately, and their interaction is implemented at a later stage. As a consequence, the lack of a general theory of language accounting for its different aspects makes it difficult to elaborate a global processing architecture.

This problem has direct consequences for natural language processing: the classical architecture relies on a series of subtasks (segmenting, labeling, identifying the structures, interpreting, etc.). This organization imposes, more or less strictly, a sequential view of language processing, one that in particular treats words as the core of the system. Such a view does not account for the fact that language is based on complex objects, made of different and heterogeneous sources of information, interconnected at different levels, whose interpretation cannot always be computed compositionally (with each information domain transferring a subset of information to another).
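The contrast between this classical sequential pipeline and the shallow-first alternative argued for in this chapter can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy lexicon, the unknown-token "difficulty" heuristic, and the threshold value are all hypothetical placeholders standing in for real chunkers, difficulty models, and deep parsers.

```python
from typing import Dict, List, Tuple

def segment(text: str) -> List[str]:
    # Segmenting: split the input into word tokens.
    return text.split()

def label(tokens: List[str]) -> List[Tuple[str, str]]:
    # Labeling: assign a crude part-of-speech tag from a toy lexicon
    # (hypothetical; a real system would use a trained tagger).
    lexicon = {"the": "DET", "cat": "N", "sleeps": "V"}
    return [(t, lexicon.get(t.lower(), "UNK")) for t in tokens]

def shallow_parse(tagged: List[Tuple[str, str]]) -> Dict:
    # Shallow processing: keep flat chunks, build no full syntactic tree.
    return {"chunks": [tagged], "depth": "shallow"}

def deep_parse(tagged: List[Tuple[str, str]]) -> Dict:
    # Deep parsing: placeholder for constructing a complete structure.
    return {"tree": tagged, "depth": "deep"}

def process(text: str, difficulty_threshold: int = 2) -> Dict:
    # Shallow-first architecture: fall back to deep parsing only when the
    # input looks difficult (here, too many tokens unknown to the lexicon;
    # the heuristic and threshold are stand-ins for a difficulty model).
    tagged = label(segment(text))
    unknown = sum(1 for _, tag in tagged if tag == "UNK")
    return deep_parse(tagged) if unknown >= difficulty_threshold else shallow_parse(tagged)

print(process("the cat sleeps")["depth"])  # prints "shallow"
```

The point of the sketch is the control flow, not the components: the sequential subtasks are still present, but a full structural analysis is only computed on demand, when the cheap shallow route signals trouble.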

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018


