Student Symposium Abstracts - 2012


Oral Presentations (9:00-10:20am & 1:40-3:00pm)

9:00 - Frédéric Chiasson: Koechlin's volume interdisciplinary project: Results in scientific research, composition and computer-aided orchestration

Charles Koechlin's orchestration treatise (Traité de l'orchestration) ascribes different dimensions to instrumental and orchestral timbre than those usually discussed in timbre research. At the core of this system is volume or largeness, related to extensity (the perceived size of the sound image), and intensité, related to loudness or intensity. Koechlin gives a mean volume scale for most orchestral instruments, but scientific research has not found any evidence of a commonly perceived scale of this kind.

In this presentation, I will discuss the results of the interdisciplinary project on Koechlin's volume. Results of a study by Chiasson, Traube, Lagarrigue, Smith & McAdams suggest that participants do hear volume as predicted by Koechlin's volume scale, and that native language, musical training and hearing had no significant effect on the results. I will then explain how Koechlin's volume and intensity have been used in my orchestral piece Intercosmos, premiered on October 15th, 2001 by the Orchestre de l'Université de Montréal.

 

9:20 - Amandine Pras: World-renowned record producers reflect upon the future of the profession

Digital technology and Internet file sharing have led to the delocalization of professional recording studios and the decline of traditional record companies. In this context, the profession of record producer is undergoing a transitional phase in terms of work environment and economic organization.

Findings will be presented from interviews with six record producers, each with an exceptional portfolio and more than twenty years of experience, reflecting on the impact of recent technological advances on their careers and on the future of their profession.

Interviewees reported that they appreciate working on a wider variety of projects than they did in the past, but all of them discussed trade-offs between artistic expectations and budget constraints that are detrimental to the artistic quality of musical recordings. They expressed hope that the new generation will develop multi-purpose structures combining the social interactions of traditional studios with modern digital technologies.

 

9:40 - Patrick Saint-Denis: Mixed music beyond the electroacoustic/instrumental paradigm

Over the past fifty years, mixed music has accustomed us to the coexistence of the acousmatic and the instrumental within a single musical work. Since both media rest on sound, their union makes sense, and the rapid democratization of this practice over the past twenty years is a clear demonstration of that. But what about mixed music beyond the instrumental/electroacoustic paradigm? Are there mixed musical practices whose mixing extends beyond the boundaries of sound?

While mixed musical practices are gradually opening up to intermedial practices, the latter seem mostly confined within the paradigm in which the music controls, or at least influences, certain parameters of the adjoined medium. Situations in which the reverse happens, i.e. where the adjoined medium influences the music, remain marginal.

It is within this dynamic of ideas that I will present my work creating and developing tools that combine sound, image and mechanics.

 

10:00 - Sibylle C. Herholz, Emily B.J. Coffey and Robert J. Zatorre: Short-term piano training changes the neural correlates of musical imagery and perception - a longitudinal fMRI study

Background and aim of the study: Musical training has been demonstrated to alter higher-order processing of auditory information, such as auditory-motor integration [1], melodic processing [2] and musical imagery [3], in comparisons of musicians and nonmusicians. Whereas previous studies have also shown effects of short-term piano training on neural correlates of auditory perception and auditory-motor networks [4, 5], short-term effects on mental imagery have not been investigated. In the present study we used functional magnetic resonance imaging (fMRI) to investigate training-related plasticity in the cortical networks for auditory processing and imagery.

Methods: Fourteen young adults with minimal previous musical experience were scanned three times at six-week intervals: a baseline interval without training, followed by six weeks of piano training with five 30-minute practice sessions per week. The learning protocol consisted of basic training in mapping keys to sounds and playing simple tone sequences during the first four weeks, followed by two weeks of training on familiar melodies. Functional data were acquired in a sparse sampling design with four conditions: listening to familiar tunes, imagining them cued by the first tones of the song, listening to random tones as a control condition, or resting in silence. Musical imagery performance was assessed with a task of judging correct or incorrect continuations of the melodies following the imagery interval [3].

Results: Participants were able to imagine the songs, as evidenced by their above-chance performance (68% correct responses) on this task. Preliminary functional data revealed no changes during the baseline period without training, but increased activity post-training compared to pre-training in left premotor, prefrontal and parietal cortex for both the listening and auditory imagery conditions. Comparison of the melodies used for training with untrained familiar melodies (counterbalanced across subjects) revealed additional effects in parietal association areas.

Conclusions: The results of this short-term auditory-motor training study reveal plastic changes in areas related to motor processing and cross-modal auditory-motor integration. We demonstrate for the first time that similar changes occur within the cortical networks for perception and for mental imagery due to experimentally controlled piano practice. The findings have implications for our understanding of training-related brain plasticity in the auditory-motor network, and for the use of music training in neurological rehabilitation.

 

1:40 - Eric Smialek: Becoming the beast: Musical expression in the extreme metal voice

My project is a multidisciplinary investigation of musical expression in extreme metal with a particular emphasis on the interaction between vocal performance and musical form. Because extreme metal vocalists use stylized screaming techniques in place of pitched melodies, the acoustical characteristics of vowel formants become an especially important resource for musical expression in this vocal style. Drawing from linguistics-based research on phonetics, and using the signal processing and resynthesis capabilities of AudioSculpt, I have generated spectrograms of extreme metal vocals from both existing studio recordings and vocals recorded in the Critical Listening Lab. These images show vocalists sacrificing the intelligibility of their lyrics by exaggerating or warping their vowels in a way that enhances the impression of a large, inhuman sound source. Based on an analytical system I developed for studying form in extreme metal, I am currently investigating instances where these vowel changes occur in conjunction with climactic moments in musical form.
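
The spectrograms in this study were produced with AudioSculpt; as a rough illustration of the same kind of inspection, the following Python sketch (using the librosa and matplotlib libraries, with a hypothetical input file) plots a spectrogram over the frequency range where vowel formants lie.

    import numpy as np
    import librosa
    import librosa.display
    import matplotlib.pyplot as plt

    # Hypothetical input file; the study itself used AudioSculpt, not this script.
    y, sr = librosa.load("vocal_take.wav", sr=None, mono=True)

    # An STFT window of ~46 ms at 44.1 kHz gives enough frequency resolution
    # to resolve vowel formant bands.
    D = librosa.stft(y, n_fft=2048, hop_length=512)
    S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

    img = librosa.display.specshow(S_db, sr=sr, hop_length=512,
                                   x_axis="time", y_axis="hz")
    plt.ylim(0, 5000)  # the formants of interest lie well below 5 kHz
    plt.colorbar(img, format="%+2.0f dB")
    plt.title("Spectrogram for inspecting vowel formants")
    plt.tight_layout()
    plt.show()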

 

2:00 - SongHui Chon: Investigation of the effect of timbre saliency on instrument blending and voice recognizability in counterpoint music

Timbre saliency is defined as the attention-capturing quality of timbre. This project explores the perception of blending as a function of timbre saliency. Listeners were presented with a composite of two simultaneous, unison instrumental sounds varying in degree of timbral difference and saliency difference and were asked to rate the degree of blending on a continuous scale between "very blended" and "not blended". The results verified that what is salient tends not to blend well. They also showed that two low-saliency instruments generally blend better than two high-saliency instruments, even when the two composites have similar differences in saliency level. Another experiment in progress will further study the effect of timbre saliency on voice perception in counterpoint music; a highly salient timbre is expected to enhance voice recognizability.

 

2:20 - Andie Sigler: How to be an AI composer

There is an obvious cultural component to music: an "empirical" level that must be learned from other music. Writing a sonata or a ragtime requires "outside" information about stylistic and formal markers.

We propose another, "rationalistic" level for musical composition where learning is not necessary. There are ideas available to any composer through reflection on basic musical materials.

This contrasts what can be reasoned about (computed) in music a priori with what is better investigated once a computational language for describing musical structure has been established.

We offer a model of how a computational "language of musical thought" might work. We base this language in the perceptual ecology of simplicity, suggesting that a first level of composing can be grounded in the emergent structure of simplicity rather than in the mash-up models of creativity currently in vogue. From basic logical principles, we show how to "recognize the obvious" in music much as a human listener would.

 

2:40 - Terri Hron: Locating the work and its meaning: searching for notation and common ground between instrumental performers and electroacoustic composers

In preparing new collaborative works with specialist performers using electronics, the question of notation and of the relationship between score and sound has been one of my main concerns.

I will present the experiences of performers who commission works for instruments and electronics (Michael Straus, saxophones; Dana Jessen, bassoon; Luciane Cardassi, piano; and myself, recorders) and of a few composers whose works they have premiered (Peter Swendsen, Chantale Laplante, Paula Matthusen), with a view to showing how collaborative creation helps fill the gaps between practices and works toward a redefinition of notation in mixed electroacoustic music. This paper will be presented at the Electroacoustic Music Studies conference in June 2012.

    

Demos & Posters (11:20am-12:40pm)

DEMONSTRATIONS:

1. Mahtab Ghamsari & Beavan Flanagan: Study of mapping on performance development for digital musical instruments

The mapping of gesture to sound in acoustic instruments is guided by physical laws (Hunt, Wanderley & Paradis, 2003), which impose constraints on the interaction. Digital instruments permit lifting such constraints and designing the mapping independently. As such, proper gesture-to-sound mapping design becomes vital to the performance experience of a new instrument. We hypothesize that in digital musical instruments, complex mapping allows for a more engaging musical experience. This hypothesis is examined by studying two performers' interaction with a novel interface (the Ballagumi) connected to a modal sound synthesis unit through two distinct mappings, one complex and one simple. Several short miniatures composed exclusively for the instrument and synthesis unit serve as a testing ground for the different mapping strategies. The study is purely exploratory; it reflects on the overall role of mapping in musical interaction and will contribute to the further development of the Ballagumi as an engaging novel instrument.
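
The Ballagumi's actual mappings are not reproduced here, but the distinction under study can be sketched in a few lines of Python: a simple mapping routes each sensor to exactly one synthesis parameter, while a complex mapping makes every parameter depend on several sensors at once (the coupling matrix below is an arbitrary stand-in).

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=(3, 3))  # arbitrary fixed coupling matrix

    def simple_mapping(s):
        """One-to-one: each sensor value drives exactly one parameter."""
        return {"pitch": s[0], "loudness": s[1], "brightness": s[2]}

    def complex_mapping(s):
        """Many-to-many: each parameter is a nonlinear blend of all sensors."""
        p = np.tanh(W @ s)
        return {"pitch": p[0], "loudness": p[1], "brightness": p[2]}

    sensors = np.array([0.2, 0.8, 0.5])  # normalized sensor readings
    print(simple_mapping(sensors))
    print(complex_mapping(sensors))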


2. Lauran Jurrius: High resolution surround recording: Mahler 3

In November 2011 we made a high-resolution surround recording of the McGill Symphony Orchestra performing Mahler's 3rd Symphony, making extensive use of CIRMMT equipment and with excellent results. I will give a short talk about the technical setup and how it came to be, and then play selections from the recording.

 

3. Benjamin Reimer & Marlon Schumacher: Corpus-based techniques for instrumental writing in computer-aided composition

In this talk we will present the current state of our research on corpus-based methods for instrumental writing in computer-aided composition.

We will first introduce the software library OM-Pursuit for corpus-based sound modeling within the computer-aided composition environment OpenMusic. This library allows a user to ‘transcribe’ a target sound into a symbolic score for a specific instrument via atomic decomposition.

To account for playability by a human performer, we followed three complementary approaches based on the idea of 'performance constraints' applied before, during and after the decomposition phase (using an adapted matching pursuit algorithm implemented in pydbm by G. Boyes at McGill University).
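
The adapted algorithm in pydbm is not reproduced here; as a generic sketch of the underlying idea, plain matching pursuit greedily explains a target signal as a weighted sum of dictionary atoms (here random unit-norm vectors standing in for instrument samples), and the selected atom/coefficient pairs are what a corpus-based system would render as a symbolic score.

    import numpy as np

    def matching_pursuit(target, atoms, n_iter=10):
        """Greedy matching pursuit: repeatedly pick the unit-norm atom most
        correlated with the residual and subtract its projection."""
        residual = target.astype(float).copy()
        selection = []  # (atom index, coefficient) pairs -> a symbolic "score"
        for _ in range(n_iter):
            corr = atoms @ residual
            k = int(np.argmax(np.abs(corr)))
            residual -= corr[k] * atoms[k]
            selection.append((k, corr[k]))
        return selection, residual

    # Toy example: decompose a mixture of two "instrument samples".
    rng = np.random.default_rng(1)
    atoms = rng.standard_normal((50, 256))
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
    target = 2.0 * atoms[3] + 0.5 * atoms[17]
    sel, res = matching_pursuit(target, atoms, n_iter=2)
    print(sel)  # should recover atoms 3 and 17 (approximately; atoms are not orthogonal)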

To obtain quantitative data for pitched percussion performance, a number of specific musical exercises have been designed and performed on the ‘MalletKat’, a MIDI-controller for mallet percussion. We will show our current approaches, discuss future directions and conclude with a number of examples generated by the system and performed on the MalletKat by Ben Reimer.

 

4. Francesco Tordini: Towards an augmented auditory saliency map. Experimental challenges and first results

The design challenges and results of a series of pilot experiments are presented under the umbrella of the main research on auditory saliency maps. More specifically, the use of spatial, uninformative auditory priming cues to direct a listener's attention is addressed, and observations of the effects of auditory cueing with real everyday sounds are made. At this stage, only relative, subjective pairwise saliency relations between sounds are considered. The design and rationale of the audio corpus used are discussed, together with the evolution of the test paradigm, which finally settled on a hard/easy sound-modification recognition task and its sensitivity to priming. Raw spatialization (a simplified HRTF) is used throughout the tests. Its efficiency, in terms of the user's spatial accuracy in the 2-D localization tasks required by the first pilot tests, is also considered.
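
The experiment's simplified-HRTF renderer is not shown in the abstract; the toy Python function below illustrates the general idea of "raw" spatialization by synthesizing only interaural time and level differences (all constants are illustrative, not taken from the study).

    import numpy as np

    def crude_spatialize(mono, sr, azimuth_deg):
        """Toy ITD/ILD panning; positive azimuth places the source to the right.
        A real simplified-HRTF renderer would also filter each ear's signal."""
        az = np.radians(azimuth_deg)
        itd_s = 0.00066 * np.sin(az)     # interaural time difference, ~0.66 ms max
        ild_db = 6.0 * np.sin(az)        # interaural level difference, ~6 dB max
        d = int(round(abs(itd_s) * sr))  # delay (in samples) for the far ear
        gl = 10.0 ** (-ild_db / 40.0)    # split the level difference across ears
        gr = 10.0 ** (+ild_db / 40.0)
        pad = np.zeros(d)
        if itd_s >= 0:                   # source on the right: left ear lags
            left, right = np.concatenate([pad, mono]), np.concatenate([mono, pad])
        else:                            # source on the left: right ear lags
            left, right = np.concatenate([mono, pad]), np.concatenate([pad, mono])
        return np.stack([gl * left, gr * right], axis=1)

    sr = 44100
    t = np.arange(sr) / sr
    burst = np.sin(2 * np.pi * 500 * t) * np.exp(-8 * t)  # a short test sound
    stereo = crude_spatialize(burst, sr, azimuth_deg=45)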

 

POSTERS:

5. Audrey-Kristel Barbeau: The Performance Anxiety Inventory for Musicians (PerfAIM): a new tool to assess self-perceived music performance anxiety in popular musicians

Previous studies on Music Performance Anxiety (MPA) used existing measures that did not show appropriate psychometric properties for assessing self-perceived levels of MPA in popular musicians. I will present the steps undertaken to develop and validate a new questionnaire, the Performance Anxiety Inventory for Musicians (PerfAIM). Content validity and face validity were established using focus groups and interviews. I determined the internal consistency, test-retest reliability, concurrent criterion-related validity and construct validity (convergent and divergent) using a sample of 69 popular professional musicians and music students. The PerfAIM demonstrated excellent internal consistency (Cronbach's alpha = 0.93), very good reliability (ICC = 0.89 with 95% CI), and satisfactory concurrent criterion-related validity and convergent validity (Pearson product-moment correlation coefficient). The PerfAIM is therefore an adequate measure of self-perceived levels of MPA in popular musicians. No significant difference was found between men's and women's scores on the PerfAIM. Further discussion will include anxiety-level differences based on age, instrument type, and years of training.
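
For readers unfamiliar with the internal-consistency statistic reported above, the short Python sketch below computes Cronbach's alpha from an item-score matrix; the data are invented, and the PerfAIM items themselves are not reproduced.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Toy data: 6 respondents answering 4 questionnaire items.
    scores = np.array([
        [4, 5, 4, 5],
        [2, 2, 3, 2],
        [5, 5, 4, 4],
        [1, 2, 2, 1],
        [3, 3, 4, 3],
        [4, 4, 5, 5],
    ])
    print(round(cronbach_alpha(scores), 2))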

 

6. Felix Fréderic Baril: Audio Representations of Psychological States [2] - Audio examples

Recent years have seen numerous advances in research on the psychology of music perception and the science behind perceptual mechanisms. CIRMMT's Music Perception and Cognition research axis specializes in this field; research projects have studied memory for dissonance and consonance, brain responses to musical stimuli, the perception of musical form, neuroanatomical reactions to music and speech, etc. Most of these research topics study the manner in which the listener "absorbs" musical information.

The Audio Representations of Psychological States project aims to complement that research by proposing a theory of "psychological projections". This is accomplished through the interplay between live room-simulation technology and its pre-recorded counterpart, in the context of a new work for soprano and ensemble. In the examples presented, the descent into madness depicted in Guy de Maupassant's short story Le Horla is recreated in a surround environment.

 

7. Gregory Burlet: NEON.js: Neume editor online

As part of the larger Single Interface for Music Score Searching and Analysis (SIMSSA) project, NEON.js provides functionality to manipulate digitally encoded symbolic music notation via an online, open-source graphical user interface implemented in JavaScript. Currently, we are focusing on editing early neumatic (square-note) notation. NEON.js will serve as a component within an online, do-it-yourself, optical music recognition framework. The primary purpose of the neume editor is to provide an easy and accessible interface for correcting note pitch and ornamentation errors made when an optical music recognition algorithm processes scanned scores. While editing, the underlying symbolic data, encoded in the Music Encoding Initiative (MEI) format, is transformed to reflect the changes made within the editor. Although primarily developed to edit existing digitally encoded musical scores, NEON.js remains extensible to create scores in neumatic notation from scratch.
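
NEON.js itself is written in JavaScript and operates on full MEI documents; the following Python fragment only illustrates, on a deliberately stripped-down stand-in for MEI (real MEI files are richer and namespaced), the kind of attribute-level pitch correction the editor writes back into the encoding.

    import xml.etree.ElementTree as ET

    # Minimal stand-in for an MEI fragment.
    mei = """<mei>
      <note xml:id="n1" pname="c" oct="4"/>
      <note xml:id="n2" pname="d" oct="4"/>
    </mei>"""

    root = ET.fromstring(mei)

    def correct_pitch(root, note_id, pname, oct):
        """Fix a recognition error by rewriting a note's pitch attributes."""
        for note in root.iter("note"):
            if note.get("{http://www.w3.org/XML/1998/namespace}id") == note_id:
                note.set("pname", pname)
                note.set("oct", str(oct))
                return True
        return False

    correct_pitch(root, "n1", "e", 4)  # suppose the OMR engine misread this note
    print(ET.tostring(root, encoding="unicode"))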

 

8. Emily B.J. Coffey (1,2,3), Sibylle C. Herholz (1,2,3) & Robert J. Zatorre (1,2,3): Effects of long-term musical training on neuronal correlates of auditory imagery

The aim of the present study was to investigate the long-term effect of musical training on auditory imagery by comparing musicians and non-musicians using functional magnetic resonance imaging. Participants listened to the beginning and imagined the continuation of familiar melodies in the scanner. Functional data were acquired in a sparse sampling design. Preliminary results show that both musicians and non-musicians were able to correctly imagine the melodies, as evidenced by their above-chance performance on the imagery task, but musicians showed better performance than non-musicians. During imagery, a cortical network encompassing auditory, motor and association areas was activated in both groups. However, the groups differed in their activation of the supplementary motor area. This indicates an effect of long-term music training on the motor preparation network, which is involved not only in motor imagery but also in auditory imagery.

1-Montreal Neurological Institute, McGill University, 2-International Laboratory for Brain, Music and Sound Research (BRAMS), 3-Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT)

 

9. Meghan Goodchild: Peak emotional responses to orchestral gestures: An analysis of continuous and retrospective ratings

Recent research indicates that prominent changes in orchestration elicit emotional responses in music listeners. Orchestration and timbre are underdeveloped areas of music theory, particularly when compared with other musical parameters such as harmony, melody, rhythm and form. A clear taxonomy of techniques and a conceptual framework related to how they function are still lacking.

As a starting point for inquiry, twelve musical excerpts were chosen to fit within four categories of orchestral gestures defined by the researchers based on instrumentation changes that vary in terms of time course (gradual or sudden) and direction (addition or reduction). Forty-five participants (22 musicians and 23 nonmusicians) listened to the excerpts and continuously moved a slider to indicate the intensity of their emotional responses. They also completed questionnaires outlining their specific subjective experiences (chills, tears, and other reactions) after each excerpt. I will demonstrate different response profiles for the orchestral gestures using new visualizations of time-varying musical parameters (instrumentation, dynamics, tempo, and spectral properties) and I will consider individual differences among participants related to factors including musical training and familiarity with the stimuli.

 

10. Philippe Hamel, Douglas Eck and Yoshua Bengio: Multi-timescale principal mel spectral components for automatic annotation of music audio

Automatic annotation is the task of applying semantic descriptors, or tags, to music audio. In other words, the goal is to learn how to describe, in words, the audio content of a given music clip. Feature extraction is a crucial part of any automatic annotation system. Good features should be able to model low-level aspects of music audio such as timbre, loudness and pitch, but also higher-level aspects such as melody, phrasing and rhythm. Low-level aspects can be relatively well modelled by features computed over short-time windows. Higher-level aspects, on the other hand, are salient only at larger timescales and require a better representation of time dynamics. In order to obtain a better representation of time dynamics in music audio, we propose to compute general features at different timescales.
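
A minimal sketch of the multi-timescale idea, assuming the librosa library and a synthetic stand-in clip: the same mel analysis is repeated with short, medium and long windows, and the per-timescale summaries are concatenated into one feature vector (the paper's principal-component step would then be applied across many clips).

    import numpy as np
    import librosa

    sr = 22050
    y = librosa.chirp(fmin=110, fmax=1760, sr=sr, duration=3.0)  # stand-in clip

    features = []
    for n_fft in (1024, 4096, 16384):   # short, medium and long analysis windows
        m = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                           hop_length=n_fft // 2, n_mels=40)
        # Summarize each timescale over the clip, then stack the summaries.
        features.append(librosa.power_to_db(m).mean(axis=1))
    x = np.concatenate(features)        # one multi-timescale vector per clip

    # Stacking such vectors for many clips yields a matrix X from which
    # principal components could be kept as classifier inputs, e.g.:
    # from sklearn.decomposition import PCA; X_pca = PCA(50).fit_transform(X)
    print(x.shape)  # (120,) = 3 timescales x 40 mel bands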

 

11. Mike Klein: Multi-voxel pattern analysis for decoding musical category representation in the brain

Multi-voxel pattern analysis (MVPA) is a powerful new technique in the functional neuroimaging field. In this approach, sometimes called "decoding" or "machine learning", patterns of functional MRI brain data are used to decode mental states in awake, healthy subjects. We used MVPA to examine the neural basis of categorical perception (CP), a process in which continuously varying physical stimuli are perceived as belonging to a limited number of discrete categories. Musically trained subjects, who had previously been shown to demonstrate strong CP of musical intervals, listened to minor, major, and perfect melodic intervals (over a variety of pitches) while fMRI images were recorded. Small spheres of voxels in the right superior temporal sulcus (STS) and the intraparietal sulci (IPS) were significantly predictive of which category of interval subjects were presented with. These results are indicative of a ventral "what" stream of information processing as well as dorsal circuitry involved in object normalization.
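
As a schematic of the decoding step only, with simulated data standing in for the fMRI patterns and scikit-learn in place of the authors' actual pipeline:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: one row of voxel activations per trial within a small
    # sphere of voxels, with interval-category labels 0/1/2 standing in for
    # minor / major / perfect. Real pipelines extract these from fMRI volumes.
    rng = np.random.default_rng(0)
    n_trials, n_voxels = 120, 50
    X = rng.standard_normal((n_trials, n_voxels))
    y = np.repeat([0, 1, 2], n_trials // 3)
    X[y == 1] += 0.3   # inject a weak class-dependent pattern for the demo
    X[y == 2] -= 0.3

    # Decode the category from the voxel pattern; above-chance cross-validated
    # accuracy (chance = 1/3 here) is the evidence that the region carries
    # category information.
    acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
    print(f"decoding accuracy: {acc:.2f}")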

 

12. Brett Leonard and Padraig Buttner-Schnirer: The Objective & Subjective Differences in DAW Math

The subject of sound quality often arises when discussing the merits of various digital audio workstations (DAWs). Many engineers argue that one DAW "sounds better" than another, but very little open, objective data exists on the subject. To test these claims, five DAWs are fed the same multi-track digitized audio from a single converter. This audio is then processed by lowering all faders in each DAW by a fixed, arbitrary amount, generating five mixes that are identical except for the internal math performed during the gain change in each DAW. These mixes are tested for discriminability by highly trained listeners. This testing reveals detectable differences between DAWs, even when only simple processes are performed.
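
The "internal math" at issue can be imitated in a few lines of numpy: apply the same fader move at two different bus precisions and examine the residual (the constants here are illustrative and not taken from any specific DAW).

    import numpy as np

    def apply_fader(x, gain_db, dtype):
        """Lower a fader by gain_db with the mix bus running at `dtype`
        precision, then return to 64-bit floats for comparison."""
        g = dtype(10.0 ** (gain_db / 20.0))
        return (x.astype(dtype) * g).astype(np.float64)

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 48000)          # one second of full-scale noise

    a = apply_fader(x, -6.0, np.float32)       # e.g. a 32-bit float mix bus
    b = apply_fader(x, -6.0, np.float64)       # e.g. a 64-bit double mix bus

    # The residual between the two "identical" mixes is tiny for a single
    # gain stage, which is why trained listeners and careful test design
    # are needed to detect any difference at all.
    peak_residual_db = 20 * np.log10(np.max(np.abs(a - b)))
    print(f"peak residual: {peak_residual_db:.1f} dBFS")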


13. Charalampos Saitis: Investigating consistency in verbal descriptions of violin preference by experienced players

We conducted content analyses on free-format verbal descriptions collected in a perceptual experiment investigating intra-individual consistency and inter-individual agreement in preference judgments by experienced violinists. In the experiment, 20 musicians played 8 violins of different make and age and were asked to rank them in order of preference (from least to most preferred) and to provide a rationale for their choices through a specially designed questionnaire. The responses were classified into semantic categories emerging from the free-format data, and the relations between the various categories were examined. Based on these analyses, consistency in verbal descriptions of violin preference within and across violinists, as well as within different discriminating categories (most versus least preferred violin), will be discussed. The effect of expertise (e.g., professional versus amateur musician, years of violin training) will also be examined. Finally, comparisons with previously published nonverbal results will be considered.

 

14. David Sears, David Weigl and Jason Hockman: Investigating beat salience: a reaction-time approach

This project investigates beat salience, a perceptual measure of the listener’s experience of the beat and its effects on the perception of various levels of the metrical hierarchy. While methods exist to approximate beat salience (e.g., pulse clarity, beat induction strength), they do not address the effect of beat salience on the perception of higher levels of metrical structure, nor do they take listener experience into account. We conducted a behavioural study adopting a reaction-time and a tapping task to identify those parameters which elicit a strong sense of beat when listening to music. The assumption underlying this approach is that a rhythmic structure conforming to listener expectations will result in faster response times, thereby indicating a priming effect. Unexpected events are theorized in the literature to increase cognitive load, thereby resulting in slower response times. We will use the results of this study as the basis for developing a causal computational model to dynamically predict the listener’s expectations as the auditory stimulus unfolds.
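
The reaction-time logic of the study can be sketched with simulated data: if metrically expected probe positions prime responses, reaction times there should be reliably faster (scipy's t-test below is a stand-in, not necessarily the authors' analysis).

    import numpy as np
    from scipy import stats

    # Hypothetical reaction times (ms) to probes at metrically expected
    # versus unexpected positions; real data come from the RT/tapping tasks.
    rng = np.random.default_rng(2)
    rt_expected = rng.normal(350, 40, 60)
    rt_unexpected = rng.normal(385, 45, 60)

    # A negative t (expected faster than unexpected) would indicate priming.
    t, p = stats.ttest_ind(rt_expected, rt_unexpected)
    print(f"t = {t:.2f}, p = {p:.4f}")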

Keynote Addresses (10:20am & 3:20pm)

10:20 - Professor Daniel Levitin, CIRMMT, McGill: Measuring Musical Expressivity

I'll report on three recent experiments in which we attempted to quantify and measure musical expressivity, forming a psychophysics of musical emotion. These experiments were conducted by my recent graduate students Anjali Bhatara, Anna Tirovolas, Eve-Marie Quintin, and Bradley Vines.


3:20 - Miller Puckette: Design choices for computer instruments and computer compositional tools

When designing tools for computer musicians there are at least three classes of considerations that must somehow be balanced.  Most obviously there is the question of what you want to have happen at the moment of performance (broadly defined).  Second, a well-designed tool should support the day-to-day workflow of developing the performance materials (such as a score or a real-time environment).  Finally there is a need for a sort of openness in support of interoperability with other tools, flexibility of application, and stability in time.  In this talk I'll consider some existing tools from these perspectives and try to draw lessons about how to think about new design problems.