The study of gesture (the movements people make with their hands when talking) has grown into a well-established field, and research continues to be pushed in exciting new directions. Bringing together a team of leading scholars, this Handbook provides a comprehensive overview of gesture studies, combining historical overviews with concise snapshots of current, state-of-the-art, multidisciplinary research. Organised into five thematic parts, it considers the roles of both psychological and interactional processes in gesture use, as well as the status of gesture in relation to language. Attention is given to different theoretical and methodological frameworks for studying gesture, including semiotic, linguistic, cognitive, developmental, and phenomenological theories and observational, experimental, corpus-linguistic, ethnographic, and computational methods. It also contains practical guidelines for gesture analysis along with surveys of empirical research. Wide-ranging yet accessible, it is essential reading for academic researchers and students in linguistics and the cognitive sciences.
Previous research has shown that language-specific features play a guiding role in how children develop the expression of events in speech and gesture. This study adopts a multimodal approach and examines Mandarin Chinese, a language characterized by context use and verb serialization. Forty children (four to seven years old) and ten adults were asked to describe fourteen video stimuli depicting different types of causal events involving location or state changes. Participants’ speech was segmented into clauses, and co-occurring gestures were analyzed in relation to causation. The results show that the older the children, the greater their use of contextual clauses that contribute meaning to event descriptions. It was not until the age of six that children used adult-like structures, namely, single gestures representing causing actions aligned with verb serializations in single clauses. We discuss the implications of these findings for the guiding role of language specificity in multimodal language development.
As an explicitly usage-based model of language structure (Barlow & Kemmer, 2000), cognitive grammar takes ‘usage events’ of language as the starting point from which linguistic units are schematized by language users. To be true to this claim for spoken languages, phenomena such as non-lexical sounds, intonation patterns, and certain uses of gesture should be taken into account to the degree to which they constitute the phonological pole of signs, paired in entrenched ways with conceptual content. Following through on this view of usage events also means recognizing the gradable nature of signs. In addition, taking linguistic meaning to consist of not only conceptual content but also a particular way of construing that content (Langacker, 2008, p. 43), we find that the forms of expression mentioned above play a prominent role in highlighting how speakers construe what they are talking about, in terms of different degrees of specificity, focusing, prominence, and perspective. Viewed in this way, usage events of spoken language are quite different in nature from those of written language, a point which highlights the need for differentiated accounts of the grammars of these two forms of expression.
Recent embodied theories of meaning known as ‘simulation semantics’ posit that language comprehension engages, or even amounts to, mental simulation. What is meant here by ‘language comprehension’, however, deviates from the perspectives on interpersonal communication adhered to by researchers in social psychology and interactional linguistics. In this paper, we outline four alternative perspectives on comprehension in spoken interaction, each of which highlights factors that have remained largely outside the current purview of simulation theories. These include perspectives on language comprehension in terms of (i) striving for intersubjective conformity; (ii) recognition of communicative intentions; (iii) prediction and anticipation in a dynamic environment; and (iv) integration of multimodal cues. By contrasting these views with simulation theories of comprehension, we identify a number of fundamental differences in the kind of process comprehension is assumed to be (passive and event-like versus active and continuous), as well as the kind of stimulus that language is assumed to be (comprising unimodal units versus being multimodal and distributed across conversational turns). Finally, we discuss potential points of connection between simulation semantics and research on spoken interaction, and touch on some methodological implications of an interactive and multimodal reappraisal of simulation semantics.