Editorial

Published online by Cambridge University Press:  29 June 2009

Copyright © Cambridge University Press 2009

This issue of Organised Sound explores Interactivity in Musical Instruments. Contributors were asked to consider questions such as: are we really in control of our musical instruments, from both a pragmatic and a theoretical perspective? The notion of control places the instrument itself in a passive position – that is, it becomes simply a machine or mechanism to be conquered. In reality, the instrument has a set of qualities that play an active role in the music-making process. In sound-based or electroacoustic music, complex interpolations of timbral characteristics are often required. Indeed, the morphology of an entire ‘orchestra’ of algorithms and/or sample re-synthesis may be under the control or manipulation of a single performer. Design approaches that allow a quick, accurate and intuitive engagement with the sound material are therefore paramount.

Acoustic musicians often discuss the ways in which a fine instrument speaks, a turn of phrase that summarises the ease and speed with which the instrument sounds, and the effort the musician must invest not only to excite the instrument into sound but to produce a rich and sonorous tone.

Such an instrument, easily manipulable and dynamically variable, is by nature somewhat unstable. High levels of performer skill secure the desired musical outcome; the musician is nonetheless aware that the instrument teeters on the brink of chaos. This quality, while seemingly counter-intuitive, is a crucial counterpoint in virtuosic performance, where the musician moves past a conscious application of technique into a dynamic relationship with the instrument. The conduit is a cybernetic-like action–response loop, shaped by the response and timbral qualities of the instrument, the performer’s musical intent, and other conditioning factors such as the acoustics of the performance space and received performance practice.

The notion of control is therefore potentially problematic. Perhaps the electroacoustic music performer is not so much ‘in control’ when navigating the potentials inherent in the work. If this is so, then performance gestures take on a very different function: their designation moves from an event-based classification to encompass gesture as form and timbre as a web of inter-relationships, influencing orchestration, focus or structural evolution as the performance/musical work unfolds.

It becomes clear that a rewarding musical performance depends on the qualities of both the performer/musician and the instrument. In order to understand this relationship, the unstable, chaotic qualities of high-calibre instruments need to be described and quantified. Such research would capture some of the ‘magic’ an experienced musical instrument maker (luthier) brings to the task of evolving instrument design – the quality that results in the sought-after sound of a Guarneri del Gesù or Antonio Stradivari violin, or indeed of the products of a host of contemporary luthiers. It would also inform the design of NIMEs (new interfaces for musical expression), hopefully producing more musical, expressive and flexible instruments.

Perhaps the reward of a ‘musical’ instrument is to be found in the dynamics of controllable or steerable chaos, which, if true, would call into question the predominance of direct mapping approaches. If playing a musical instrument is represented as a point of influence within a dynamical system, then mapping is never direct: an input is never more than one of several influences within that system. Clearly a hierarchy exists: without air, the wind and brass families fall silent; without the excitation moment of the bow, the string family do likewise.
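
To make the contrast concrete, here is a minimal sketch (all names and constants are hypothetical, chosen purely for illustration, and this is no one's published method) of the difference between a direct mapping and a mapping in which the performer's gesture is only one forcing term in a simple dynamical system:

```python
# Minimal sketch: direct mapping vs. input as one influence in a
# dynamical system. All names and constants are illustrative only.
import random

def direct_mapping(gesture):
    """Direct: the input value alone determines the parameter."""
    return 200.0 + 800.0 * gesture                 # e.g. filter cutoff in Hz

class DynamicalMapping:
    """Indirect: the input is one of several influences on an evolving state."""
    def __init__(self, rest=0.3, stiffness=4.0, noise=0.02):
        self.state = rest           # current state, normalised 0..1
        self.rest = rest            # value the system relaxes towards
        self.stiffness = stiffness  # strength of the restoring force
        self.noise = noise          # stands in for acoustics, instability, etc.

    def step(self, gesture, dt=0.01):
        drive = (self.stiffness * (self.rest - self.state)   # restoring force
                 + 2.0 * (gesture - 0.5)                     # performer influence
                 + random.uniform(-self.noise, self.noise))  # residual 'chaos'
        self.state = min(1.0, max(0.0, self.state + dt * drive))
        return 200.0 + 800.0 * self.state                    # same range as above

print(direct_mapping(0.9))          # jumps straight to the target value
m = DynamicalMapping()
for _ in range(5):
    print(round(m.step(gesture=0.9), 1))  # drifts towards, never jumps to, it
```

With identical gesture values, the indirect mapping yields different outputs depending on the system's current state and residual noise, which is precisely the sense in which the input is an influence rather than a command.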

All of these dilemmas sit at the core of the articles contributed to this edition of Organised Sound.

Drummond takes up the discussion around the word ‘interactivity’, seeking to contextualise the term in musical activities including shared control and collaboration. He argues that a coherent conceptual framework for ‘interactivity’ is critical to the development of new approaches to music-making. He reviews these ideas in interactive composition and presents a number of classifications and models that have been discussed in the literature but rarely brought together in this manner. A discussion of notions of control and of mapping rounds out this useful review of the relationships of the performer, the interface, the instrument and the performance.

Schroeder and Rebelo take us to the phenomenological, drawing on the French philosopher Maurice Merleau-Ponty to examine the relationship between performers’ bodies and their instruments. The issue of embodied knowledge is vital in both the learning and teaching of musical performance skills, and is described in the relationship the musician has to/with his or her instrument. The Cartesian mind/body split is examined in terms of the way a performer ‘senses’ an instrument, and Bergson’s view of the body as an instrument of action is outlined, leading to Merleau-Ponty’s discussion of the tool as an extension of ourselves, something we inhabit. As such, the ‘instrument transcends its existence as a tool’, becoming an object with which we perceive. This philosophical discussion is grounded in a 2007 study which examined ‘how performers deal with a variety of conditions that characterise network performance’. I believe that a good understanding of the phenomenology of the relationship between performers and their instruments will help researchers and developers bring the ‘x-factor’ into the equation when designing and evaluating new instruments and interfaces. Without it, interfaces are largely an engineering solution. The fact that most new interfaces developed during the last few decades have never received public acclaim illustrates the need to look beyond the technical challenges towards engagement and embodiment, surely the very zenith of expression.

As an adjunct to the phenomenological discussion, I present a paper on the WiiMote. As a new interface, its acceptance and uptake have reached previously unseen levels. Something about the nature of the interface appears to be rewarding to musicians from novice to expert. The WiiMote accentuates the physicality of music-making; it brings gesture to the forefront of interface design, and through gesture it can communicate musical intention and authenticity in performance. The paper presents research undertaken with highly skilled acoustic musicians in an effort to understand the relationship between the control parameters available on acoustic instruments and the sonic outcome. A generic model was developed and then applied to the WiiMote when mapping its dynamic and momentary controllers in a work for hurdy-gurdy and live electronics (Kyma system), which uses the WiiMote as the performance interface.
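
By way of illustration only (the handler names, parameter ranges and smoothing constant below are hypothetical, and no actual WiiMote driver or Kyma API is implied), the distinction between the two classes of controller might look like this in code:

```python
# Hypothetical sketch: handling a WiiMote's dynamic (continuous) and
# momentary (switch-like) controllers. Names and ranges are illustrative;
# no real WiiMote driver or Kyma API is implied.

class WiimoteMapper:
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing   # one-pole low-pass coefficient
        self.tilt = 0.0              # smoothed accelerometer state

    def on_accelerometer(self, raw_x):
        """Dynamic controller: smooth the noisy sensor, scale to a parameter."""
        self.tilt = self.smoothing * self.tilt + (1 - self.smoothing) * raw_x
        grain_density = 5.0 + 95.0 * (self.tilt + 1.0) / 2.0  # grains/second
        return grain_density

    def on_button(self, name, pressed):
        """Momentary controller: a discrete trigger or gate, not a stream."""
        if name == "A" and pressed:
            return ("trigger_drone",)         # e.g. start a sustained layer
        if name == "B":
            return ("gate_buzz", pressed)     # a held button behaves as a gate
        return None

mapper = WiimoteMapper()
print(mapper.on_accelerometer(0.4))   # continuous stream -> continuous parameter
print(mapper.on_button("A", True))    # discrete press -> discrete event
```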

Whalley presents a survey of software-based agents and their application in music-making processes. He offers a theoretical framework for their use in creating music/sound art and moves into a discussion of a new ‘hybrid model that integrates non-linear, generative, conversational and affective perspectives on interactivity’. The paper takes the form of a dialogue between developments in the computer sciences and their possible application to music-making, where emotion and expression are paramount. Whalley delves into the interdisciplinary nature of this field by presenting a new model of how these disciplines interact and support each other. He proposes a model for a multidimensional AI decision space germane to music, in addition to discussing a new model for interactivity, of which the performance/reactive approach is most pertinent within the broader discourse of this edition.

Magnusson discusses musical instruments as cognitive extensions. Drawing on Don Ihde, who extends Merleau-Ponty’s phenomenology, Magnusson argues that ‘many digital instruments are to be seen primarily as extensions of the mind rather than the body’, a point that many will find challenging. He defines a ‘computational music system as an epistemic tool, as an instrument (organon) whose design, practice and often use are primarily symbolic’. This discussion opens up questions such as: what defines an instrument? What defines the role of gesture in performance, or the nature of musical performance itself? Bruno Latour’s ideas of concretisation are discussed as representative of the unity of multiple entities working towards the same cause: the perception of a digital instrument as an integrated whole rather than a set of synthesis algorithms, a set of mappings and an interface. Magnusson defines the digital instrument as an ‘epistemic tool (a conveyor of knowledge used by an extended mind)’, where the extended mind is that of the performer, embedded in the instrument through the musician’s relationship with it in the context of performance.

Van Nort’s article ‘Instrumental Listening: sonic gestures as design principle’ seeks to expand the models applied to the conception and design of new musical interfaces by addressing gesture from the point of view of the perception of human intentionality in sound. The action–sound coupling is understood as musical when applied to musical instruments, and Van Nort takes this as the basis for a proposed analysis framework and design methodology for new interfaces. The question of whether and how performance gestures relate to musical structure and form is examined through the lens of Schaeffer, and includes an examination of the perceptual criteria of sonic gesture analysis. This research is applied to a granular synthesis instrument through the use of empirical mode decomposition (EMD), in which a signal is decomposed into a set of intrinsic mode functions (IMFs). Van Nort thus draws on phenomenology in developing a signal-processing innovation for instrument design and parameter analysis.
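
For readers unfamiliar with the technique, the fragment below sketches the decomposition step using the third-party PyEMD package; that choice is my assumption for illustration, as Van Nort's own toolchain is not specified here:

```python
# Sketch of empirical mode decomposition with the PyEMD package
# (pip install EMD-signal). Illustrative only; not Van Nort's toolchain.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
# Test signal: two oscillatory components plus a slow trend.
signal = np.sin(2 * np.pi * 24 * t) + 0.5 * np.sin(2 * np.pi * 5 * t) + t

imfs = EMD().emd(signal)   # rows are IMFs, fastest first; the last row is the residue
print(imfs.shape)          # e.g. (4, 1000); the count depends on the signal
print(np.allclose(imfs.sum(axis=0), signal))  # IMFs plus residue reconstruct the input
```

Each IMF isolates oscillatory behaviour at a different time scale, which is what makes the decomposition useful for gesture and parameter analysis.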

Bown, Eldridge and McCormack bring social–artistic relationships to the centre of design considerations in their article ‘Understanding Interaction in Contemporary Digital Music: from instruments to behavioural objects’. They seek to redefine the role of software in musical culture through two types of agency: performative agency and memetic agency. The predominant acoustic paradigm is dissected and its correlates in digital music practice identified. In many cases these correlates are multifaceted constructs in which the understood relationships of past practices become unclear. For instance, a discussion of ‘composing instruments’ has no place in the acoustic paradigm, where the luthier is unlikely also to be the composer. A relationship certainly exists between the development of an instrument and the compositional demands made on the performer; however, they are identifiable as independent streams of practice, albeit contributing to the same end. Such distinctions have become unmanageable in digital music. The paper brings behavioural objects together with memetic and performative agency to describe a fundamentally different relationship between people and objects (software) in the context of musical performance.

Taxonomies are rare in the new-interface area, both because it is a relatively young research domain and because devices vary so much from one to another that their heterogeneity makes finding common ground difficult. Nevertheless, in order to define a design space that allows researchers to build on each other’s work, and to establish common ground for comparison and usability/expression testing, a taxonomy becomes critical. Essl and Rohs’ article ‘Interactivity for Mobile Music-Making’ starts by defining such a taxonomy for the sensor capabilities of mobile phones. They proceed to ask how ‘these technological choices impact and inform emerging musical practice’, discussing possible gesture spaces for accompanist gestures and figurative gestures, and how these become a useful perceptual constraint for interaction in the context of body-centred performance. A discussion of the application of all of the sensing modalities available in a mobile phone follows, leading to a discussion of development frameworks for mobile musical instruments. The article concludes with the application of CaMus, ShaMus, MiMus and Fendrix, utilising visual tracking (including analysis of optical flow), accelerometers, audio analysis and multitouch screen interaction.
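
Purely to illustrate why a shared taxonomy matters (the device names and sensor sets below are hypothetical, and this is a paraphrase rather than Essl and Rohs' actual scheme), even a few lines of structured data make heterogeneous devices comparable along common axes:

```python
# Toy illustration of a sensor taxonomy as a shared design space.
# Device names and sensor sets are hypothetical, not from the article.
DEVICES = {
    "phone_a": {"accelerometer", "camera", "microphone", "multitouch"},
    "phone_b": {"accelerometer", "keypad", "microphone"},
}

# Common ground: the interaction design space both devices can realise.
print(sorted(set.intersection(*DEVICES.values())))  # ['accelerometer', 'microphone']

# Union: the full space a comparison or usability test would need to cover.
print(sorted(set.union(*DEVICES.values())))
```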

Rasamimanana, Kaiser and Bevilacqua present an experimental study of articulation in bowed strings. As in my own discussion of musical instrument control, this paper treats articulation as a first-order control rather than a second-order variation of the sounding of a note. The moments between notes take on a critical importance, with detailed consideration given to the transient phase. High Resolution Methods (HRM) model the deterministic components of the transient sound as exponentially modulated sinusoids, which the authors report provide higher frequency resolution than FFT analysis for short analysis windows. They found that ‘different bowing techniques imply distinct motion–sound relationships’, and that gestures should be considered not so much a stream of data as a temporal event, in which the time relationships between control and sound parameters are very complex; this complexity is not captured by current approaches that map interface variables directly to synthesis algorithms.
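
The underlying signal model can be sketched with a standard subspace estimator such as ESPRIT; that choice is mine for illustration, and the authors' particular high-resolution method may differ. From a short frame, the eigenvalues of a shift-invariance relation yield each component's frequency and damping without the resolution limit of a short FFT window:

```python
# Sketch: estimating exponentially damped sinusoids with ESPRIT, one
# standard high-resolution (subspace) method. Illustrative only; the
# authors' exact method may differ.
import numpy as np

def esprit(frame, n_poles, m=None):
    """Return complex poles z_k, where frame[n] ~ sum_k a_k * z_k**n."""
    m = m or len(frame) // 2
    # Data matrix whose columns are shifted windows of the frame.
    H = np.lib.stride_tricks.sliding_window_view(frame, m).T
    U = np.linalg.svd(H, full_matrices=False)[0][:, :n_poles]
    # Shift invariance: U[1:] ~ U[:-1] @ Phi; eigenvalues of Phi are the poles.
    Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    return np.linalg.eigvals(Phi)

# One real damped sinusoid gives a conjugate pole pair, hence n_poles=2.
n = np.arange(200)
frame = np.exp(-0.004 * n) * np.cos(2 * np.pi * 0.12 * n)
poles = esprit(frame, n_poles=2)
print(np.abs(np.angle(poles)) / (2 * np.pi))  # ~0.12 cycles/sample
print(np.log(np.abs(poles)))                  # ~ -0.004 per-sample damping
```

The estimate stays sharp for frames far shorter than an FFT of comparable resolution would need, which is the property the authors exploit when analysing transients.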

Dan Overholt concludes this issue with a presentation of his Musical Interface Technology Design Space (MITDS), which seeks to provide a theoretical framework for the iterative development of interactive musical instruments. Overholt focuses on working within multidimensional parameter spaces for musical composition and performance, applying the MITDS to the complex relationships between human performative gestures and multivariable synthesis algorithms.

In conclusion, this issue of Organised Sound adopts a visionary approach, seeking to present ‘over the horizon’ ideas relating to electroacoustic music instrument and interface development. It seeks to re-invigorate discussion about the component parts on which successful musical instrument development depends. Considerations of visceral and behavioural levels are enshrined in the kinetic gesturing that brings about musical outcomes. The new interface/instrument designer’s toolkit needs to grow beyond the technical knowledge of sensors and programming languages, of sound analysis and human–computer interaction (HCI), to include a detailed understanding of the phenomenological relationship between the instrument and performance, the multifaceted and somewhat dynamic roles of the various stakeholders, and the changing understanding of gesture as not so much a stream of data as a temporal event. I hope this issue of Organised Sound provides you with a great deal to think about and inspires some passionate and constructive discussions.