Student Symposium 2011 Abstracts

Oral Presentations (9:00-10:20 & 13:00-14:40) / Présentations orales (9:00-10:20 et 13:00-14:40)

9:00 Charalampos Saitis: Investigating perceptual aspects of violin quality

A perceptual experiment was designed to investigate how consistent violinists are at evaluating violin quality. The objective was to examine both intra- and inter-subject consistency across a range of violins. Skilled classical violinists were asked to play a set of different violins, evaluate their quality, and order them by preference. The results indicate that violin players are self-consistent when evaluating different violins in terms of overall preference. However, a significant lack of agreement between individual violinists was observed. A second perceptual experiment was thus designed to investigate whether between-individual agreement increases when violin players are asked to focus on specific features of the instrument. Skilled violinists were asked to play a set of different violins and evaluate them according to various criteria. The criteria were determined based on (a) the analysis of verbal data collected in the first experiment, and (b) potential correlation with measured vibrational properties of the violin. Preliminary results from a pilot study (small group of instruments) indicate that between-individual agreement varies considerably when rating specific instrument characteristics. Results of a more in-depth study (with more instruments, some very similar to one another) will be presented.
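To illustrate how such between-player agreement can be quantified, here is a minimal Python sketch using Kendall's coefficient of concordance W (the abstract does not name the statistic used; W and the data below are illustrative assumptions):

    import numpy as np

    def kendalls_w(rankings):
        # Kendall's W for an (m raters x n items) array of preference
        # ranks, without ties; W = 1 means perfect agreement, W = 0 none.
        rankings = np.asarray(rankings, dtype=float)
        m, n = rankings.shape
        rank_sums = rankings.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12.0 * s / (m ** 2 * (n ** 3 - n))

    # Hypothetical data: four violinists ranking five violins (1 = favourite)
    ranks = [[1, 2, 3, 4, 5],
             [2, 1, 3, 5, 4],
             [5, 4, 1, 2, 3],
             [3, 5, 2, 1, 4]]
    print(kendalls_w(ranks))  # a low W reflects weak inter-player agreement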

 

9:20 Michel Bernays: Verbal expression of Piano Timbre: Multidimensional semantic space of adjectival descriptors

Timbre is a key to musical expressivity in virtuosic piano performance. When discussed among professionals, timbre is described with a specialized lexicon, essentially inherited from the learning process, whose imagery aims to capture sonic nuances. Despite the abstractness of this transmission, pianists seem to agree on the verbal means of describing timbre. This vocabulary and its structure were explored by asking pianists to rate their familiarity with, and the semantic similarities between, the fourteen most commonly used descriptors (from Bellemare & Traube, 2006). From the questionnaire answers, the timbre descriptors and their semantic relationships could be represented with sufficient fit in a four-dimensional space. Grouping of neighbouring terms and an additional cluster analysis set apart five subsets of terms. Selecting the most familiar term in each subset, we found that this representational space of piano timbre descriptors can be convincingly described by the five terms Round, Bright, Dry, Dark and Velvety.
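A minimal sketch of this kind of analysis pipeline in Python (assuming scikit-learn and SciPy; the matrix below is random stand-in data, not the study's ratings):

    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster

    # Stand-in 14x14 dissimilarity matrix derived from pairwise semantic
    # similarity ratings of the timbre descriptors (0 = identical terms).
    rng = np.random.default_rng(0)
    d = rng.random((14, 14))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0)

    # Embed the descriptors in a four-dimensional space.
    coords = MDS(n_components=4, dissimilarity='precomputed',
                 random_state=0).fit_transform(d)

    # Hierarchical clustering to set apart subsets of neighbouring terms.
    labels = fcluster(linkage(coords, method='ward'), t=5, criterion='maxclust')
    print(labels)  # five subsets, cf. Round, Bright, Dry, Dark, Velvety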

 

9:40 Frederic Chiasson: Koechlin's volume interdisciplinary project: results in scientific research, composition and computer-aided orchestration

Charles Koechlin's orchestration treatise (Traité de l'orchestration) ascribes to instrumental and orchestral timbre different dimensions than those usually discussed in timbre research. At the core of this system are volume, or largeness, related to auditory size, and intensité, related to loudness. Koechlin gives a mean volume scale for most orchestral instruments, but scientific research has not found any evidence of a common perception of such a scale. In this presentation, I will discuss the results of the interdisciplinary project on Koechlin's volume. Preliminary results of the study by Chiasson, Traube, Lagarrigue, Smith & McAdams suggest that participants do hear volume as predicted by Koechlin's volume scale. Native language, musical training and hearing have no significant effect on the results. I will then explain how Koechlin's volume and intensity have been used in my orchestral piece Intercosmos and how they might improve Orchidée, the computer-aided orchestration software.

 

10:00 SongHui Chon: Investigation of the role of timbre saliency in blending and orchestration

This project investigates the role of timbre saliency in blending and orchestration. Timbre is what enables us to distinguish two instruments playing the same note at the same loudness level. We conjecture that different instrument timbres have different levels of saliency ("timbre saliency"), which may explain some underlying principles behind well-known examples in orchestration treatises. In this project, a timbre saliency space is first determined by analyzing data from a tapping experiment, which also reveals a list of perceptually relevant acoustic features. Next, the perception of blending of two unison timbres is examined, and the results are analyzed in terms of the saliency levels of the two timbres. A list of acoustic features common to timbre saliency and blending will be determined in this step, which will be helpful in future research on the impact of timbre saliency on the perception of multiple voices in music.

 

13:00 Philippe-Aubert Gauthier et al: Microphone array signal processing for the characterization and extrapolation of sound fields in aircraft cabins

Sound field extrapolation (SFE) aims at predicting a sound field in a domain surrounding a limited region over which the sound field is measured with a microphone array. SFE finds application in noise source identification, sound source localization, and sound field measurement for spatial audio. Sound field characterization (SFC) aims at a more generic description of a measured or extrapolated sound field using different physical or subjective metrics. This presentation will summarize our recent developments in SFE and SFC applied to the specific case of aircraft cabin sound field reproduction. An experimental validation of a recently developed SFE method is presented. The proposed SFE method is based on an inverse problem formulation combined with a recently proposed regularization approach: a beamforming matrix in the discrete smoothing norm of a quadratic cost function. In a second step, the results obtained from the SFE method are applied to SFC.
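In generic terms, such a formulation can be sketched as follows (the notation is illustrative; see the authors' publications for the exact formulation). With p the measured array pressures, G the propagation matrix from candidate sources to microphones, q the source strengths, B the beamforming matrix and \lambda the regularization parameter:

    \[
    \hat{\mathbf{q}} \;=\; \arg\min_{\mathbf{q}}\;
    \lVert \mathbf{p} - \mathbf{G}\mathbf{q} \rVert_2^{2}
    \;+\; \lambda^{2}\,\lVert \mathbf{B}\mathbf{q} \rVert_2^{2},
    \qquad
    \hat{\mathbf{q}} \;=\;
    \left( \mathbf{G}^{H}\mathbf{G} + \lambda^{2}\mathbf{B}^{H}\mathbf{B} \right)^{-1}
    \mathbf{G}^{H}\mathbf{p}.
    \]

The extrapolated field is then obtained by propagating the identified source strengths to points outside the measurement region.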

 

13:20 Andie Sigler: A computational theory of melodic structure and implications for listener expectation

A computational methodology based on simple and recursive patterns is proposed that allows a deep analytic engagement with melody. An algorithm discovers simple (recursive) interlocking structures that describe patterns of motion and parallels in the melody. The method neither requires nor produces segmentations, instead modeling the dynamic flow of an entire melody. A system is developed to find and repair irregularities in the patterns found, offering “simplifying” variations of the melody. These variations aim to be adequate as music in their own right, retaining the spirit of the original, while being subjectively simpler. They propose potential “backgrounds” against which the original (which in comparison “escapes” from a more rigid pattern) can be heard. A set of testable hypotheses about listener expectations is suggested by the dynamic implications of simple and recursive patterns (configurations affording rule inference) inherent in music and the effects of violation of such patterns.
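As a toy illustration of the kind of "simple pattern" such an algorithm might discover (a hypothetical Python sketch, not the author's method), the following code finds maximal runs of constant melodic interval:

    def constant_interval_runs(pitches, min_len=3):
        # Find maximal runs where successive pitches move by a fixed
        # interval, e.g. a repeated note, a scale, or a chromatic line.
        runs, start = [], 0
        for i in range(2, len(pitches)):
            if pitches[i] - pitches[i - 1] != pitches[start + 1] - pitches[start]:
                if i - start >= min_len:
                    runs.append((start, i, pitches[start + 1] - pitches[start]))
                start = i - 1
        if len(pitches) - start >= min_len:
            runs.append((start, len(pitches), pitches[start + 1] - pitches[start]))
        return runs

    # MIDI pitches: an ascending whole-tone fragment, then repeated notes
    print(constant_interval_runs([60, 62, 64, 66, 67, 67, 67]))
    # -> [(0, 4, 2), (4, 7, 0)]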

 

13:40 Amandine Pras: The impact of listening between takes and getting feedback from an external record producer on musicians' experience during recording sessions

When recording in the studio, musicians encounter challenges that differ from those of concert situations. In a study conducted at NYU Steinhardt, we investigated the impact of listening between takes in the control room and/or getting feedback from an external record producer on musicians' experience during recording sessions. We collected data from 25 jazz musicians grouped into five ensembles, after they had recorded three complete takes of a musical composition under an experimental condition (with or without listening between takes, combined with or without a producer) and after they had listened to these takes a few weeks later. Musicians perceived listening between takes in the control room as the most efficient method. However, we found that record producers' feedback had the best artistic impact on the evolution of the takes. Participants reported that both the presence of a producer and listening between takes gave common ground to the ensemble, but also made them too self-conscious.

 

14:00 Trevor Knight: Performance analysis and visualization in Open Orchestra

As part of the larger Open Orchestra project, the presented work examines the potential for computer analysis and visualization of a student musician's performance for learning purposes. The work compares a student's instrumental performance to a reference audio track of the same part played by an expert musician. This audio-to-audio comparison allows for greater nuance and detail in analysis than using a symbolic representation. Extracting basic signal features and segmenting the audio allows comparisons of musical features. The system then uses the data to generate comparative visualizations that aim to help the student hear how and where the two performances differ.
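A minimal sketch of an audio-to-audio comparison of this kind (assuming the librosa library; the file names, features and alignment choice are illustrative, not the Open Orchestra implementation):

    import numpy as np
    import librosa

    # Hypothetical file names for the student and expert recordings.
    student, sr = librosa.load('student_take.wav')
    expert, _ = librosa.load('expert_reference.wav', sr=sr)

    # Chroma features summarize pitch content, largely independent of timbre.
    c_student = librosa.feature.chroma_cqt(y=student, sr=sr)
    c_expert = librosa.feature.chroma_cqt(y=expert, sr=sr)

    # Dynamic time warping aligns the two performances frame by frame.
    cost, path = librosa.sequence.dtw(X=c_expert, Y=c_student)

    # Per-frame distance along the alignment path highlights where the
    # student diverges from the reference; this can drive a visualization.
    divergence = np.array([np.linalg.norm(c_expert[:, i] - c_student[:, j])
                           for i, j in path[::-1]])
    print(divergence.max(), divergence.argmax())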

 

14:20 Vanessa Yaremchuk: Artificial neural networks for the analysis of gesture in musical performance

What we see during a musical performance affects our perception of the sound. There is a great deal of visual information to consider during a performance, ranging from properties of the environment, such as lighting or furnishings, to the physically observable attributes and aesthetic of the performer. Motion capture methods allow us to extract performer movements from the rest of this visual information, leaving only data about how a musician moves. There are many interesting research questions concerning movement consistency within and across performers: Are there, for instance, defining movement styles for a given instrument? How might movement measurements be compared across performers? This research uses artificial neural networks to analyze existing motion capture data for patterns and information relating to style within and across performers. The analysis aims to further our understanding of how performer movement impacts our understanding of musical performance.
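As a minimal sketch of this kind of analysis (assuming scikit-learn; the data and network architecture below are illustrative, not those of the research), one might train a small network to identify a performer from motion-capture feature vectors:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical data: each row summarizes one performance excerpt with
    # motion features (e.g. marker velocities, sway amplitude); the labels
    # identify the performer, to probe for individual movement styles.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 12))
    y = rng.integers(0, 4, size=120)  # four performers

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    # Above-chance accuracy would suggest performer-specific movement style.
    print(net.score(X_te, y_te))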

   

Posters & Demos (10:20-11:40) / Posters et Démonstrations (10:20-11:40)

1. Finn Upham: Piece vs performance: Comparing coordination of audiences' physiological responses to two different performances of Arcadelt's Il bianco e dolce cigno

What aspects of music determine listeners' emotional experience? By looking at physiological responses to two different performances of the same work, we consider which shared responses might be due to the common musical structure and which might depend on the performers' interpretations of the work or other differentiating conditions. The responses of two audiences observing different performances of the same work were recorded continuously. To quantify the audiences' response to the music, we look at the coordination of strong responses across each audience in features extracted from four biosignals: galvanic skin response (GSR), electromyographic activity of the zygomaticus (EMGz) and corrugator (EMGc) muscles, and blood volume pulse (BVP). When both audiences share coordination behaviour, this may be caused by the similarities between the two stimuli. When the audiences show distinct patterns of coordination, their respective responses are more likely due to aspects of the stimuli that differ.
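One simple way to quantify "coordination of strong response" across an audience (a hypothetical Python sketch; the measure actually used in the study may differ) is to count, per time window, how many listeners show an above-threshold response:

    import numpy as np

    def coordination_score(signals, win=30, z_thresh=1.0):
        # signals: (listeners x samples) array of one biosignal feature,
        # e.g. GSR. Returns, per window, the fraction of the audience whose
        # response exceeds z_thresh standard deviations above their own mean.
        z = (signals - signals.mean(axis=1, keepdims=True)) \
            / signals.std(axis=1, keepdims=True)
        n_win = z.shape[1] // win
        strong = z[:, :n_win * win].reshape(z.shape[0], n_win, win).max(axis=2)
        return (strong > z_thresh).mean(axis=0)

    # Stand-in GSR traces for one audience of 20 listeners
    rng = np.random.default_rng(1)
    print(coordination_score(rng.normal(size=(20, 3000))))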

 

2. Sibylle Herholz et al: Neuronal correlates of imagined and perceived tunes

Auditory imagery can be a very vivid experience, and it has previously been shown that imagery and perception have similar neuronal correlates. I will present MEG and fMRI findings indicating that neuronal correlates of musical imagery are related to behavioral measures of musical imagery, to long-term musical training and to the vividness of auditory imagery. A first MEG study, conducted in Münster with C. Lappe, A. Knief and C. Pantev, used a novel imagery task in which participants had to imagine parts of familiar melodies in their minds and then judge whether a tone presented after the silent imagery interval matched the imagined melody. Musicians performed better than nonmusicians in this task and showed a stronger neuronal response within auditory cortices to unexpected tones, suggesting that they had more precise mental images of the tunes. The observed response to tones violating the imagined tunes was similar to the classic mismatch negativity, indicating that imagery activated similar neuronal networks within secondary auditory cortex. In a recent fMRI study, conducted in Montreal with R. Zatorre and A. Halpern, we were able to show more directly that anterior and posterior parts of secondary auditory cortices are active during both imagery and perception of familiar tunes. Furthermore, comparing imagery to perception revealed an extensive cortical network including prefrontal and supplementary motor cortex, the intraparietal sulcus and the cerebellum. A right-hemispheric network of prefrontal cortex, anterior cingulate and superior temporal gyrus was especially active during imagery in participants with very vivid auditory imagery. An additional goal of the experiment was to investigate episodic memory for the different mental states of imagining and listening to tunes. During a recognition test for tunes that had been either heard or imagined earlier, left secondary auditory cortex was active in addition to areas involved in episodic memory retrieval. A left-lateralized fronto-temporal network was also associated with vividness of imagery, indicating that auditory imagery is involved in this task, in line with most participants reporting an imagery-based strategy. In summary, these results add to our understanding of the functional correlates of perception, imagery and memory of familiar tunes, and of their relationship with long-term musical training and the subjective vividness of mental imagery.

 

3. Meghan Goodchild: The role of orchestral effects in emotional responses to music

This project explores the concept of peak experience, which consists of two or more coordinated affective responses, such as chills, tears, emotions, awe, and other reactions. Previous research suggests that timbral contrasts (e.g., sudden shifts in orchestration) induce emotional responses in listeners. Musical stimuli were chosen to fit within four categories defined by the researchers based on instrumentation changes: gradual or sudden addition, or gradual or sudden reduction in instruments. Forty participants (20 musicians and 20 non-musicians) listened to the orchestral excerpts and continuously moved a slider to indicate the intensity of their emotional responses. They also completed questionnaires outlining their specific subjective experiences (chills, tears, awe, action tendencies, and other reactions) after each excerpt. We will discuss response patterns specific to the various musical parameters under investigation, as well as consider individual differences caused by factors including musical training and familiarity with the stimuli.

 

4. Daniel Donelly and Andrew Hankinson: An annotated data set for optical music recognition systems development

Music documents present a significant challenge for automated analysis systems. While humans can very easily learn and adapt to new notation symbols, computer systems require explicitly labelled examples of these symbols to achieve an acceptable level of precision in recognizing and interpreting them from a page image. Producing such examples is an expensive and labour-intensive process requiring skilled human intervention. While some training sets do exist, they are largely developed ad hoc, often using convenience sampling to select documents for inclusion. A need exists for a training set drawn from a wide variety of document sources. We will present work towards developing a corpus of printed sixteenth-century music sources to be used as a standardized training set, including the rationale for our document selection process, with a specific focus on ensuring a broad range of sources. We will also introduce a method for sharing this data across multiple software systems.
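As an illustration of what a shareable annotation for one symbol might contain (a hypothetical record; the actual interchange format will be presented in the talk), one entry could be serialized from Python to JSON:

    import json

    # Hypothetical annotation for one notation symbol on a page image.
    annotation = {
        "source": "example_print_1589",   # document identifier
        "page": 12,
        "bounding_box": {"x": 341, "y": 880, "width": 26, "height": 31},
        "label": "semibreve",             # notation symbol class
        "staff_line": 3,
    }
    print(json.dumps(annotation, indent=2))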

 

5. Steven Phillips: Interesting applications of composite materials to musical instruments

The advent of composite materials has led to superior performance in many applications, ranging from sporting equipment to aircraft. The high stiffness, directional properties and robustness of these materials also make them an attractive alternative to the wood used in building violins and other musical instruments. Consequently, there has been much research on bringing their physical properties closer to those of wood so that they are better suited to these applications. The aim of the present study was to maximize the loudness of the violin by taking advantage of the high stiffness of unidirectional carbon fibre/epoxy composites, and to investigate other novel applications to musical instruments, including baroque flutes and cello fingerboards. This work is expected to lead to a better understanding of the design limits of composite materials in violin building and other novel musical instrument applications.

 

6. Jason Hockman: Fast vs slow: Learning tempo octaves from user data

Widespread use of beat- and tempo-tracking methods in music information retrieval tasks has been limited by the sporadic, unreliable results these algorithms can produce. While sensorimotor and listening studies have demonstrated the subjectivity and variability inherent in human performance of this task, MIR applications such as recommendation require more reliable output than present tempo estimation models provide. In this paper, we present an initial investigation of tempo assessment based on a simple classification of whether the music is fast or slow. Through three experiments, we provide performance results of our method across two datasets, and demonstrate its usefulness in the pursuit of reliable global tempo estimation.
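A sketch of how a binary fast/slow decision can resolve the "octave" ambiguity of a tempo tracker (a hypothetical Python illustration; the paper's actual classifier and features are not detailed here):

    def resolve_tempo_octave(candidate_bpm, is_fast, fast_cutoff=110.0):
        # Given a tempo tracker's estimate, pick the octave (bpm/2, bpm,
        # or 2*bpm) most consistent with a fast/slow classification
        # learned from user data.
        candidates = [candidate_bpm / 2, candidate_bpm, candidate_bpm * 2]
        if is_fast:
            matching = [t for t in candidates if t >= fast_cutoff]
            return min(matching) if matching else max(candidates)
        matching = [t for t in candidates if t < fast_cutoff]
        return max(matching) if matching else min(candidates)

    # A tracker reports 70 bpm, but the track is classified as fast:
    print(resolve_tempo_octave(70.0, is_fast=True))  # -> 140.0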

 

7. Cédric Camier et al: Aircraft cabin sound field reproduction based on vibro-acoustic model and inverse method

The presented work is part of a CRIAQ multidisciplinary, industrial and academic project on reproducing the aircraft cabin sound environment in a real mock-up. The complete measurement, processing and reproduction methods bring together the expertise of BOMBARDIER, CAE, CIRMMT and GAUS in in-flight noise analysis, flight simulators, perceptual evaluation and sound field reproduction. In this presentation, the emphasis is on the sound field reproduction strategy. Two reproduction systems, based on actuators or on loudspeakers, are simulated in order to compare their feasibility and performance. Reproduction performance is evaluated on the basis of minimizing the reproduced pressure error at a microphone array positioned in a simplified modeled cabin. An inverse method linking exciter strengths to the pressures over the spatially extended array region is developed in matrix form and shows promising results. Moreover, modal analysis of the vibro-acoustic model used in the reproduction system offers useful guidance for the parametrization of real dedicated systems.
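Numerically, an inverse method of this kind reduces to regularized least squares; a minimal Python sketch (the Tikhonov regularization and all dimensions here are illustrative assumptions, not the project's exact formulation):

    import numpy as np

    rng = np.random.default_rng(0)
    n_mics, n_exciters = 64, 16

    # Stand-in transfer matrix G from exciter strengths to pressures at
    # the microphone array (from a vibro-acoustic model), and a target
    # pressure field p to be reproduced.
    G = rng.normal(size=(n_mics, n_exciters)) \
        + 1j * rng.normal(size=(n_mics, n_exciters))
    p = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)

    # Regularized inverse: q = (G^H G + lam I)^-1 G^H p
    lam = 1e-2
    q = np.linalg.solve(G.conj().T @ G + lam * np.eye(n_exciters),
                        G.conj().T @ p)

    # Relative reproduction error at the array quantifies performance.
    print(np.linalg.norm(p - G @ q) / np.linalg.norm(p))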

 

8. Sven-Amin Lembke: The relevance of spectral shape to perceptual blend between wind instrument timbres

Previous studies have suggested the perceptual relevance of stable spectral properties characterizing the timbre of orchestral wind instruments. Analogous to human voice formants, stable local spectral maxima have been reported for these instruments across a wide pitch range. Furthermore, agreement of these formant regions between instrumental sounds has been suggested to contribute to the perception of blend between timbres. Our aim is to verify and validate these hypotheses through a two-stage approach comprising acoustical analysis and perceptual testing. Spectral analyses are computed on a broad audio sample database across all available pitches and dynamic levels. Based on the obtained spectral information, partial tones are identified, and their frequencies and amplitudes are used to build global distributions of partials across all pitches and between dynamic levels. A curve-fitting procedure applied to these distributions then yields spectral profiles from which characteristics such as formant regions are identified and described. This can be complemented by signal processing techniques such as linear predictive coding or cepstrum analysis to obtain parametric expressions for spectral shape. The second stage (scheduled after May 2011) takes the obtained formant characteristics and tests their perceptual relevance in an experiment employing a production task. Results from these two stages will provide a clearer picture of what perceptual blend corresponds to acoustically and will help explain its usage in orchestration practice.
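A minimal Python sketch of the first analysis stage (assuming NumPy/SciPy; the peak selection and Gaussian envelope model are illustrative assumptions, not necessarily the study's procedure):

    import numpy as np
    from scipy.fft import rfft, rfftfreq
    from scipy.signal import find_peaks
    from scipy.optimize import curve_fit

    def partials(signal, sr):
        # Pick spectral peaks as candidate partial tones.
        spec = np.abs(rfft(signal * np.hanning(len(signal))))
        freqs = rfftfreq(len(signal), 1.0 / sr)
        idx, _ = find_peaks(spec, height=spec.max() * 0.01, distance=100)
        return freqs[idx], spec[idx]

    def envelope(f, a, f0, w):
        # Smooth spectral-envelope model; f0 marks a formant-like region.
        return a * np.exp(-((f - f0) ** 2) / (2 * w ** 2))

    # Synthetic tone: partials of 220 Hz shaped by a broad maximum near 880 Hz
    sr = 44100
    t = np.arange(sr) / sr
    tone = sum(envelope(k * 220, 1.0, 880.0, 600.0)
               * np.sin(2 * np.pi * k * 220 * t) for k in range(1, 30))
    f, a = partials(tone, sr)
    params, _ = curve_fit(envelope, f, a, p0=[a.max(), 1000.0, 500.0])
    print(params[1])  # fitted centre of the formant region (~880 Hz)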

 

9. EP Trio (Eliot Britton, Erika Donald and Benjamin Duinker): 3shot

The EP trio is a fixed chamber ensemble dedicated to research, creation and performance in live electronic music. Comprising an augmented digital drum kit, an electric cello with sensor-enabled K-Bow, and turntable-based electronics, the trio blends contemporary electroacoustic and electronic music sensibilities. This presentation examines the group's decisions relating to the physical set-up, control capabilities, sonic identity and mappings of each instrument, as well as its role within the ensemble. The challenges encountered in fusing three disparate and highly flexible instruments into a coherent, musically expressive ensemble will be considered. The trio will perform a work by Eliot Britton and discuss the creative, rehearsal and performance processes and the emerging performance practice involved.

 
 

Keynote Address (15:00-16:00) / Conférence invitée (15:00-16:00)

Professor Stephen McAdams, Canada Research Chair in Music Perception and Cognition, CIRMMT, McGill University: Measuring listening through time: Cognition and emotion

Theories of musical materials and forms only rarely take into account how listening evolves through time during a piece of music. We remember some things and not others. At times our attention is drawn by some events and at others we willfully focus it on a particular melody, rhythmic pattern or instrument. What we understand or feel at a given moment often depends on what we have retained in memory from earlier moments and what we may project to future moments in anticipation of things to come. Music also has the power to take us through a wide range of emotions in a fairly short period of time. All of these psychological processes dynamically sculpt our experience in time, driven both by the music we are listening to — its structure and interpretation by the performers — and how it relates to our past musical experience, training, listening habits and æsthetic preferences. The aim of this presentation is to explore how we might probe what I call the cognitive and affective dynamics of music listening with the methods of cognitive psychology and psychophysiology, in order to understand why the musical experience is so rich and so individual. Evidence from a variety of sources will be reported, including in-concert monitoring of real-time responses from large numbers of audience members using both portable IT devices and arrays of biosensors.