CIRMMT Student Symposium 2017 - Abstracts


General Assembly and Student Symposium 2017 Overview

9:30-11:00: ORAL PRESENTATIONS / PRÉSENTATIONS ORALES

9:30 - Jason Noble, Eddy Kazazis: Towards a perceptual chordal space: an empirical study on auditory preference rules

Hasegawa has suggested that complex harmonies, such as those used in “atonal” music by Schoenberg and later in “spectral” music by Grisey, can be analyzed as upper partials of a hypothetical virtual fundamental. Kazazis and Hasegawa built upon this idea and suggested that any complex chord can be mapped to a unique position in a three-dimensional space whose axes represent chordal qualities, namely “chordal hue” (virtual or actual fundamental), “chordal saturation” (roughly corresponding to inharmonicity), and “chordal brightness” (chordal centroid). We evaluate the perceptual relevance of this three-dimensional chordal space and its ability to predict listeners’ responses to complex harmonies drawn from the 20th- and 21st-century repertoire. Participants rate a presented chord according to: the extent of its “rootedness” for a set of pre-estimated virtual fundamentals; the amount of coherence or incoherence between its constituent notes; and its pitch height in a presumably holistic mode of listening. The resulting data are analyzed to estimate correlations along the dimensions of chordal hue, chordal saturation, and chordal centroid. These quantifications also provide additional analytical insight into Noble and McAdams’s previous studies on the perceptual difference between “chords” and “sound masses,” which showed that density alone is not a sufficient predictor of perceptual fusion. Finally, a brief introduction to compositional applications of the chordal space is given.
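For concreteness, the “chordal hue” axis can be illustrated by a brute-force virtual-fundamental search: pick the candidate fundamental whose harmonic series best fits the chord tones. The abstract does not specify the authors’ estimation procedure, so the search range, error measure, and centroid definition in this sketch are illustrative assumptions only.

```python
import numpy as np

def virtual_fundamental(freqs_hz, candidates=np.arange(50.0, 200.0, 0.25)):
    """Score each candidate fundamental by how closely the chord tones fall
    on integer multiples of it (mean deviation in octaves). The candidate
    range is restricted to avoid trivially deep sub-octave fundamentals."""
    freqs = np.asarray(freqs_hz, dtype=float)
    best_f0, best_err = None, np.inf
    for f0 in candidates:
        harmonics = np.maximum(np.round(freqs / f0), 1.0)  # nearest partial numbers
        err = np.mean(np.abs(np.log2(freqs / (harmonics * f0))))
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0, best_err

def chordal_centroid(freqs_hz):
    """One plausible reading of 'chordal brightness': the unweighted mean
    frequency of the chord tones."""
    return float(np.mean(freqs_hz))

# A C major triad (C4, E4, G4) yields a virtual fundamental near C2 (~65 Hz).
print(virtual_fundamental([261.6, 329.6, 392.0]))
```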

10:00 - Zored Ahmer: Activity-based music recommendations

For this project, we create a music recommendation system that considers the user's current activity. More specifically, we aim to use the address (URL) of the web page the user is visiting to guide their music playlist. We plan to do this by creating a configurable browser extension.

We start by having a user create playlists for the browser-based activities they partake in. A user might make one for ‘Work’, one for ‘Play’ and one for ‘Reading’. The user then customizes each playlist with the websites they consider to fall into that activity, along with seed information and tuneable parameters. We then use the Spotify Recommendations API, which accepts a seed (artist, track, or genre) and optional tuneable parameters (such as danceability) and returns a list of songs. We play one of these songs and, if the user likes it (denoted by how long they listen to it before skipping), the song is used as part of the seed. If the user switches their active website and the new website is part of a different playlist, we switch the recommendation seed and tuneable parameters accordingly. This way, the extension effectively maintains several different playlists, and the user’s browsing activity switches between them. The extension will include a UI that allows all tuneable parameters to be configured per playlist, facilitating fine-grained tuning.
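For illustration, here is a minimal Python sketch of the recommendation call (the extension itself would run as JavaScript in the browser; the token handling, track IDs, and playlist configuration are placeholders):

```python
import requests

SPOTIFY_TOKEN = "..."  # placeholder: OAuth access token obtained at login

def recommend(seed_tracks, target_danceability=None, limit=10):
    """Query the Spotify Recommendations endpoint with track seeds and an
    optional tuneable parameter; returns a list of track URIs to play."""
    params = {"seed_tracks": ",".join(seed_tracks), "limit": limit}
    if target_danceability is not None:
        params["target_danceability"] = target_danceability
    r = requests.get("https://api.spotify.com/v1/recommendations",
                     headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
                     params=params)
    r.raise_for_status()
    return [t["uri"] for t in r.json()["tracks"]]

# Hypothetical per-activity configuration: when the active tab's domain
# belongs to another playlist, the seed and tuneable parameters are swapped.
PLAYLISTS = {
    "Work":    {"sites": ["github.com"],    "seeds": ["<track-id>"], "danceability": 0.3},
    "Reading": {"sites": ["wikipedia.org"], "seeds": ["<track-id>"], "danceability": 0.2},
}
```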

We hope that this browser extension will allow users' day-to-day activity to be better accounted for, so that they can be given more enjoyable music recommendations.

10:30 - Juan Sebastian Delgado, Alex Nieva: Multimodal visual augmentation for cello performance

Visual observation of performance gesture has been shown to play a key role in audience reception of musical works, particularly in experimental and newly created works. As author Luke Windsor points out, “the gestures that ‘accompany’ music are potentially a primary manner in which an audience has direct contact with the performer.” This project proposes to augment audience perception through the creation of a visual display that is responsive to performance gesture. Our main goals are: A) to control the interactive lighting (visual display) by mapping the gestural information collected; B) to use the interactive lighting to augment interpretation of the musical text; and C) to make performance practice decisions in parallel with designing the lighting/gestural interface.

For this, we have used motion and relative-position tracking sensors to gather information from the performer’s gestures and map it to a visual display consisting of an array of addressable light-emitting diodes on the surface of a cello. We investigated meaningful gestures with the optical motion capture system at CIRMMT and analyzed the performance techniques to choose the most suitable sensors. In addition, we performed audio feature extraction with a piezoelectric sensor to fuse the data and convey the final information to be mapped. The system is based on Wi-Fi-capable microcontroller boards programmed with mapping algorithms; full-duplex communication among three of these boards creates visuals that display a variety of lighting effects controlled by information derived from the sensor system.
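As a rough sketch of the mapping stage (the abstract does not detail the feature set or algorithms, so the two gesture features and the scaling below are hypothetical):

```python
def gesture_to_rgb(accel_mag, bow_speed, max_accel=20.0, max_speed=1.5):
    """Map two hypothetical gesture features to an RGB triple for one LED:
    acceleration magnitude drives brightness, bow speed drives hue
    (red = slow, blue = fast)."""
    brightness = min(accel_mag / max_accel, 1.0)
    hue = min(bow_speed / max_speed, 1.0)
    r = int(255 * brightness * (1.0 - hue))
    g = int(255 * brightness * 0.2)   # small fixed green component
    b = int(255 * brightness * hue)
    return (r, g, b)

# A fast, energetic bow stroke maps to a bright, blue-dominated colour:
# gesture_to_rgb(18.0, 1.4) -> (15, 45, 214)
```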

In order to put this project into practice, as well as to disseminate and promote the use of new technologies in the performing arts, an original piece was written for this project by composer Luis Naon (Paris Conservatory/IRCAM). “Pájaro contra el borde de la noche,” for solo cello, ensemble, and electronics, also adapted for solo cello and electronics, explores an array of performing techniques. These different techniques gave us more material from which to extract meaningful gestural information and thus to develop a comprehensive visual display that amplifies the musical experience.

11:15-12:30: CIRMMT STUDENT AWARD LIGHTNING ROUND

11:15 - Pierre Grandjean: Spherical double-layer microphone array based on ambisonic approach and Lebedev grid for spatial audio: design, fabrication, and testing

GAUS, Groupe d’Acoustique de l’Université de Sherbrooke, Sherbrooke, Canada.

CIRMMT, Centre for Interdisciplinary Research in Music, Media and Technology, McGill University, Montréal, Canada.

Spatial audio is booming in multimedia, digital arts, and industrial applications. After half a century dominated by a single principle, the stereophonic illusion between two loudspeakers, many alternative solutions have appeared and been refined to reproduce or synthesize sound fields spatially. Thus, 2D audio systems such as 5.1 or 7.1 have been conquering our living rooms and theatres. In addition, 2D and 3D loudspeaker arrays based on different principles, such as Wave Field Synthesis (WFS), Higher Order Ambisonics (HOA) or Directional Audio Coding (DirAC), are sufficiently advanced to reach a wider public. Reproducing sound fields in 3D requires either digital audio synthesis or 3D audio recording. In the latter case, recording solutions developed for 5.1 or 7.1 are not accurate enough to induce elevation rendering. Moreover, commercial ambisonic microphones that can record 3D sound fields do not exceed order 3, while research on HOA goes up to order 5. Building on Pierre Lecomte’s PhD work, this project deals with prototyping an ambisonic microphone. A 50-node discretization of the sphere according to a Lebedev grid enables the microphone array to record at ambisonic order 5. To compensate for frequency limitations, the combination of two concentric 50-node microphone arrays, one rigid and one open, is investigated. An introduction to the uses and applications of 3D sound field recording will be provided. Ambisonic theory is then briefly presented and applied to the specific cases of rigid and open spherical microphone arrays. The advantages of the Lebedev grid discretization and the double-layer combination are described. Finally, the steps to achieve this project and preliminary results are shown.
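For context, order-5 ambisonics requires (5+1)² = 36 spherical-harmonic components, which the 50 Lebedev nodes can resolve. The following is a minimal sketch of the encoding step only; it deliberately omits the radial filters for rigid and open spheres, which are central to the actual design, and the real-SH convention used is just one common choice:

```python
import numpy as np
from scipy.special import sph_harm

def encoding_matrix(azimuth, colatitude, order=5):
    """Real spherical harmonics sampled at the array's node directions;
    rows = nodes, columns = (order+1)^2 = 36 ambisonic components."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            y = sph_harm(abs(m), n, azimuth, colatitude)  # complex SH
            if m < 0:
                cols.append(np.sqrt(2) * (-1) ** m * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * (-1) ** m * y.real)
    return np.column_stack(cols)

# With p the 50 measured node pressures and az/col taken from the Lebedev
# grid, a least-squares estimate of the ambisonic signals is:
#   Y = encoding_matrix(az_nodes, col_nodes)
#   b = np.linalg.pinv(Y) @ p
```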

11:30 - David Rafferty: Strutted cell - exploring spatialization as a compositional parameter

Spatialization as a compositional parameter in multi-speaker environments opens many doorways to new possibilities in music research, while simultaneously raising several important questions. How can it be used as an effective compositional tool? On a technical level, how should the spatialization be implemented in a given space? How would one capture such an acoustic space in a recording that emulates these environments accurately? It is clear that spatialization as a compositional parameter reveals a deeper problem, not just for the composer but also for the sound engineer. The task of this research project is to explore these questions through a composed piece.

11:45 - Johnty Wang: Development of a mapping interface for alternative control of pipe organs

This project supports the conception, implementation and use of a system that interfaces with MIDI-enabled pipe organs in churches. In essence, the sound-producing system of the organ becomes the "synthesizer" component of a digital musical instrument, where the input and mapping system can be arbitrarily chosen to correspond to any performance gesture. The system is intended to be used by artists and composers realizing new works of interactive performance that involve gestural control of the instrument using alternative controllers.
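A minimal sketch of the output stage, assuming the organ exposes a standard MIDI input; the gesture parameter and note mapping here are hypothetical:

```python
import mido

out = mido.open_output()  # the organ's MIDI interface; name depends on the setup

def send_gesture(pitch_class, intensity, channel=0):
    """Map an arbitrary gesture parameter onto an organ note. Pipe organs
    have no per-key dynamics, so intensity simply gates the note on or off."""
    note = 36 + pitch_class  # hypothetical offset into the organ's range
    if intensity > 0.5:
        out.send(mido.Message('note_on', note=note, velocity=64, channel=channel))
    else:
        out.send(mido.Message('note_off', note=note, channel=channel))
```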

12:00 - André Martins de Oliveira, Katelyn Richardson: The kinetic-kinematic-physiological-musician (KKPM) database project

Though there has been recent growth in the field of musicians' wellness, researchers lack clear methods for collecting kinetic, kinematic, or physiological (KKP) data, as well as an easily accessible body of data with which to compare their results. The goal of the Kinetic-Kinematic-Physiological-Musician (KKPM) Database Project is twofold. In the short term, it aims to design and implement a database containing a compilation of key measurements of musicians, including a wide variety of KKP data, particularly those related to performance and musculoskeletal disorders. Collected data will be standardized by the establishment of measurement parameters. Once collected, data will be allocated to a pre-set, protected file shareable among researchers. The long-term goal is to keep the database operational and continuously add information so that it becomes a reliable source and reference for future research.

Though measurement parameters will eventually encompass a wide variety of KKP data, the focus during the first year will be on sEMG collection for a set group of muscles per instrument (flute and violin), as well as postural assessment. In addition to facilitating research that can support the prevention of musculoskeletal disorders in musicians, the KKPM Database Project can potentially help to improve musical performance as well. With the aid of technology, knowledge developed in other fields, such as kinesiology, can be transferred to music research; this work will hopefully lead to the implementation of scientifically supported training methods.
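The abstract does not specify a schema; as a rough sketch under that caveat, a relational design along these lines could hold the first-year sEMG and posture data (all table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect("kkpm.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS musician (
    id            INTEGER PRIMARY KEY,
    instrument    TEXT NOT NULL,       -- e.g. 'flute', 'violin'
    years_playing REAL
);
CREATE TABLE IF NOT EXISTS measurement (
    id             INTEGER PRIMARY KEY,
    musician_id    INTEGER REFERENCES musician(id),
    modality       TEXT NOT NULL,      -- 'sEMG', 'posture', ...
    site           TEXT,               -- muscle or landmark, per protocol
    sample_rate_hz REAL,
    data_path      TEXT                -- pointer to the protected raw file
);
""")
conn.commit()
```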


12:15 - Matthew Boerum, Jack Kelly, Diego Quiroz: How do virtual environments affect localization and timing accuracy when panning audio sources three-dimensionally?

This research investigates the dependencies, variables, and possible inaccuracies in sound source localization when presented through virtual reality (VR) using 3D control devices. Using the Oculus Rift, a high-quality virtual reality head-mounted display (HMD), we will evaluate how virtual environments affect a simple audio task, measuring 3D panning accuracy and task duration. Our experimental method uses CIRMMT's A816 (semi-anechoic) room and 17 Genelec 8030 loudspeakers to create a 3D audio playback system. With an accurate 3D architectural model of A816 and photorealistic models of the Genelec 8030s placed in the same positions as the real A816 setup, we can present a highly accurate visual representation of the real-world environment in virtual reality through the Oculus Rift HMD. Using both hardware rotary controls and the Leap Motion hand-gesture controller, we give listeners the ability to pan audio spatially in three dimensions within the 3D audio playback system. Localization accuracy will be tracked via the 3D panning software used to pan the audio sources, and the test interface will also record the duration from start to finish for each localization-panning task.
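The abstract does not define the accuracy metric; one natural choice, sketched below, is the great-circle angle between the target direction and the listener's final panned direction:

```python
import numpy as np

def angular_error_deg(target, response):
    """Great-circle angle between two directions given as
    (azimuth, elevation) pairs in degrees."""
    def to_unit(az, el):
        az, el = np.radians(az), np.radians(el)
        return np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
    u, v = to_unit(*target), to_unit(*response)
    return float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))

# e.g. angular_error_deg((30, 0), (45, 10)) -> ~18 degrees
```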

12:30-1:45: LUNCH & POSTER AND AUDIO DEMOS / 12H30 - 13H45 : DÎNER & AFFICHES ET DÉMONSTRATIONS AUDIO

Jeff Blum: Expressing human state via parameterized haptic feedback for mobile remote implicit communication

As part of a mobile remote implicit communication system, we use vibrotactile patterns to convey background information between two people on an ongoing basis. Unlike systems that use memorized tactons (haptic icons), we focus on methods for translating parameters of a user's state (e.g., activity level, distance, physiological state) into dynamically created patterns that summarize the state over a brief time interval. We describe the vibration pattern used in our current user study to summarize a partner's activity, as well as preliminary findings. Further, we propose additional possibilities for enriching the information content. 
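As an illustration of the parameterization idea (the actual pattern design is the subject of the study; every parameter below is hypothetical), a window of activity samples could be summarized like this:

```python
def activity_to_pattern(activity_samples, window_s=10.0):
    """Summarize a window of activity values (0..1) as vibration-pattern
    parameters: mean level sets pulse count and strength, variability
    adds temporal irregularity."""
    mean = sum(activity_samples) / len(activity_samples)
    var = sum((a - mean) ** 2 for a in activity_samples) / len(activity_samples)
    return {
        "pulses_per_burst": max(1, round(1 + 4 * mean)),
        "amplitude": 0.3 + 0.7 * mean,        # stronger when more active
        "jitter_ms": 50.0 * var ** 0.5,       # irregularity encodes variance
        "burst_period_s": window_s,
    }
```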

Connor Kemp: Vibration behaviour of woodwind reed cane - player testing of in-use reeds and materials characterization
 
There are several well-known inconsistencies frequently observed in woodwind reeds, specifically their variable stiffness (as rated by the manufacturer) and their changing behaviour over time. The present study was conducted to describe differences in the static stiffness of as-manufactured reeds, each with the same geometry, hardness, and stiffness rating. These static measurements were taken at 6 individual points along the tip and vamp of each reed to increase spatial resolution and capture potential asymmetric effects. In total, 8 reeds were measured and tracked over a 2.5-month period. A professional musician played and tracked the reeds (in terms of stiffness) until they were deemed to be beyond their useful life. This allowed objective measurements of static stiffness to be tracked and compared with ratings of perceived stiffness. It is shown that initial static stiffness measurements differ between the reeds, despite an identical rating from the manufacturer. Differences are also observed between spatial positions along the tip. The musician is found to easily identify both very soft and very stiff reeds when compared with the objective measurements. In general, there appears to be a break-in period during which the reeds continue to absorb moisture (as measured by reed mass) before stabilizing roughly one month into the experiment. Static stiffness measurements also vary with time, although changing reed masses alone do not account for the differences observed. These findings suggest that the long-standing notion of high variability within a box of purchased reeds is well founded. Suggestions could be made to the manufacturer, including more rigorous classification of manufactured reeds and humidity conditioning of reeds prior to packaging. The findings of this study will help musicians better understand the reeds they purchase and provide guidelines to manufacturers for selecting reeds that are more consistent in stiffness.
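For reference, the static stiffness at each of the 6 measurement points can be obtained as the slope of a force-deflection fit; a minimal sketch follows (the study's exact fitting procedure is not stated in the abstract):

```python
import numpy as np

def static_stiffness(forces_n, deflections_mm):
    """Stiffness (N/mm) as the least-squares slope of force vs. deflection
    through the origin: k = sum(F*d) / sum(d^2)."""
    f = np.asarray(forces_n, dtype=float)
    d = np.asarray(deflections_mm, dtype=float)
    return float(np.dot(f, d) / np.dot(d, d))

# One reed, one tip position, repeated loadings (illustrative values):
# static_stiffness([0.1, 0.2, 0.3], [0.21, 0.39, 0.62]) -> ~0.49 N/mm
```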

Cynthia Tarlao: Mind the moving music: auditory motion in experimental music
 
Inspired by the use of circular sound trajectories in contemporary music, this project aims to understand how listeners track simultaneously moving sound sources. Building on previous studies of the upper limits of spatial hearing for a single moving source, conducted at CIRMMT and the Multimodal Interaction Laboratory (by Féron, Frissen, Camier & Guastavino) using circular arrays of speakers controlled by a Max patch, we extended the investigation to multiple sound sources. How many trajectories can a listener track? Under which conditions (timbre and velocities of the different sources; musical training of the participants)? Is it possible to fuse or split a sound pattern by modifying the rotation? Our results will shed light on the perceptual mechanisms at play in dynamic sound localization and could inform the treatment of space as a musical parameter in composition.
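For illustration, circular trajectories of this kind reduce to time-varying gains on a speaker ring. Below is a minimal constant-power pairwise panning sketch; the original experiments used a Max patch, and this is not its actual algorithm:

```python
import numpy as np

def ring_gains(source_az, speaker_az):
    """Constant-power pairwise panning of one source onto a circular
    loudspeaker array; speaker_az must be listed in ascending azimuth.
    A rotating source is then ring_gains(rate_deg_s * t, speaker_az)."""
    speaker_az = np.asarray(speaker_az, dtype=float)
    n = len(speaker_az)
    diffs = (source_az - speaker_az) % 360.0
    i = int(np.argmin(diffs))   # speaker just below the source azimuth
    j = (i + 1) % n             # next speaker around the ring
    span = (speaker_az[j] - speaker_az[i]) % 360.0
    frac = diffs[i] / span
    gains = np.zeros(n)
    gains[i] = np.cos(frac * np.pi / 2)
    gains[j] = np.sin(frac * np.pi / 2)
    return gains

# e.g. ring_gains(45.0, [0, 90, 180, 270]) -> equal gains (~0.707) on
# the speakers at 0 and 90 degrees.
```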

Arun Duraisamy: Understanding the acoustic behaviour of natural fibre composites and the effects of temperature and humidity

Natural fibre composites are currently replacing wood and glass fibre secondary structures in the aerospace and automobile industries. With many traditional woods being listed as endangered, the musical instrument manufacturing industry has started looking for alternative materials, and many carbon fibre instruments have since appeared on the market. Carbon fibre proved excellent in certain respects, such as environmental resistance and weight reduction, but had less success in achieving good acoustic behaviour. In this research, flax fibre composites, made from the fibres of the flax plant, which is grown in large quantities in countries like Canada, are examined to see whether they can be a better replacement. Guitar fretboards are taken as the subject of interest; they are usually made of Brazilian rosewood, whose acoustic behaviour we try to mimic. First, Taguchi's design-of-experiments method is used to rank the influence of five parameters (Ex, Ey, Ef, thickness and density) on the acoustic behaviour. Second, the effect of temperature and humidity on natural frequency and damping is studied for different fretboard samples (flax composites of 2 different grades, bamboo), keeping rosewood as the baseline.
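As a first-order sanity check of the kind such studies use, the fretboard can be idealized as a cantilever beam; the material values in the example are rough textbook figures for Brazilian rosewood, not measurements from this project:

```python
import math

def first_natural_freq(E, rho, L, b, h):
    """First bending frequency (Hz) of a cantilever beam: length L,
    width b, thickness h (m); E in Pa, rho in kg/m^3."""
    I = b * h ** 3 / 12.0    # second moment of area
    A = b * h                # cross-sectional area
    lam = 1.875104           # first cantilever eigenvalue (beta * L)
    return (lam ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Rough rosewood-like values: E ~ 13 GPa, rho ~ 835 kg/m^3,
# L = 0.46 m, b = 0.055 m, h = 0.006 m  ->  ~18 Hz first mode
print(first_natural_freq(13e9, 835, 0.46, 0.055, 0.006))
```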

1:45-3:15: ORAL PRESENTATIONS / 13H45 - 15H15 : PRÉSENTATIONS ORALES

13:45 - Karen Yu, Zihua Tan: The breathing canvas - an interactive performative installation
 
“The Breathing Canvas” is a collaborative performative installation that involves Kenny Wong (media artist), Karen Yu (percussionist), and Zihua Tan (composer). The work is based on an earlier media artwork by Wong, “The Canvas of Resonance,” which employs thunder sheets – together with vibration motors and flickering lights – to create an immersive installation piece. In 2015, the three artists, who come from various artistic and cultural backgrounds, convened and decided to expand the dimensionality of the work by adding a performance component and enriching its kinetic and performative possibilities with the help of devices such as distance sensors and vibration speakers. In the final form, Yu will perform a composition written by Tan for this artwork. The use of thunder sheets as a percussion instrument to yield orchestral effects dates back to Mozart’s time (as found in Die Zauberflöte). The idea was later adopted by Foley artists to create thunder effects for films. In our project, we aim to push the limits of this instrument – previously intended to play a supporting role to the scenes it accompanies – by making it a highly interactive and performative solo instrument from which multifarious sonic and visual materials can be drawn. This innovative use gives the instrument a renewed purpose in the context of installation art and 21st-century contemporary music performance practice. In our work, the instrument no longer serves to enhance any scene – its performance is itself the scene.

14:15 - Paris Alirezaee: Did you feel that? Developing novel multimodal alarms for high-consequence clinical environments

Hospitals are overwhelmingly filled with sounds produced by alarms and patient monitoring devices. These sounds create a fatiguing and stressful environment for both patients and clinicians. In an attempt to attenuate this auditory sensory overload, we propose the use of a multimodal alarm system in operating rooms and intensive care units. Specifically, the system would exploit multisensory integration of the haptic and auditory channels. We hypothesize that by combining these two channels in a synchronized fashion, subjects' auditory perception threshold will be lowered, thus allowing for an overall reduction of alarm volume in hospitals. The results obtained from pilot testing support this hypothesis. We conclude that further investigation of this method could prove useful in reducing sound exposure levels in hospitals, as well as in personalizing the perception and type of alarm for clinicians.
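For illustration, the threshold comparison could be run with a standard adaptive staircase, once audio-only and once with a synchronized haptic pulse. This sketch is a generic 1-up/1-down procedure, not the study's actual protocol:

```python
def staircase(respond, start_db=40.0, step_db=2.0, reversals=8):
    """Estimate a detection threshold: lower the level after each detection,
    raise it after each miss, and average the levels at the reversals.
    `respond(level_db)` returns True if the subject detected the alarm."""
    level, direction, turns, history = start_db, -1, 0, []
    while turns < reversals:
        new_dir = -1 if respond(level) else +1
        if new_dir != direction:
            turns += 1
            history.append(level)
        direction = new_dir
        level += direction * step_db
    return sum(history) / len(history)

# Compare thresholds across conditions, e.g.:
# t_audio = staircase(detect_audio_only)
# t_multi = staircase(detect_audio_plus_haptic)   # expected to be lower
```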


14:45 - John Sullivan, Alexandra Tibbitts, Olafur Bogason: Harp gesture acquisition for expanded musical practice
 
This project is dedicated to the development of tools and techniques for real-time gestural control of live electro-acoustic performance by a solo instrumentalist. Inspired by Perry Cook’s principle of digital musical instrument (DMI) design - “Some players have spare bandwidth, some do not” - this project investigates the gestural affordances of harp performance. A concert harp is played with both hands and feet, leaving the performer little spare bandwidth to dedicate to other tasks. Yet the harpist’s natural movement suggests an opportunity to map their gestures to parametric control of external audio processing within a live concert environment. The project has been conducted in three phases: a preliminary motion capture analysis of harp performance, the development of hardware and software tools for gesture acquisition and mapping, and the creation of a new electro-acoustic work for solo harpist.
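A minimal sketch of the mapping layer, assuming gesture features arrive from the acquisition hardware and effect parameters are sent to the audio host over OSC; the feature names, addresses, and scaling are hypothetical:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # audio-processing host and port

def map_gesture(torso_lean, arm_height):
    """Torso lean (0..1) controls reverb mix; arm height (0..1) controls a
    filter cutoff, leaving the harpist's hands free for the instrument."""
    client.send_message("/fx/reverb/mix", max(0.0, min(1.0, torso_lean)))
    cutoff_hz = 200.0 * 2.0 ** (4.0 * max(0.0, min(1.0, arm_height)))
    client.send_message("/fx/filter/cutoff", cutoff_hz)  # 200 Hz to 3.2 kHz
```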

3:30-4:30: KEYNOTE ADDRESS / 15H30-16H30 CONFÉRENCE INVITÉE

15:30 - Tim Crawford: Learning to live with error? Some reflections on trying to use computers to do musicology

In the mid-1980s it looked as though the new "Personal Computer" could offer exciting new ways to do musicology. With experience, the realisation dawned that every part of that process contained unsolved problems, and it didn't take long to come to the understanding that only by collaborating with others could anything at all be achieved. Getting interested people from different disciplines together in the same room to talk about music and how to tackle it was easier than I expected, though it's still unclear whether we were in fact always talking about the same thing.

Arising to some degree from such conversations, with the addition of a certain amount of good fortune in grant funding, and coinciding with the nascent revolution in digital music distribution at the turn of the new century, the ISMIR conferences essentially catalysed a new hybrid discipline, Music Information Retrieval. But all the time, the question continues to nag: "What can all this do for a musicologist?"

The essential problem seems to be that music - at any rate, the stuff studied in musicology - resists the rigid categorisation and analysis axiomatic to MIR. While this is hardly a new observation, I would like to think that accepting this fact, rather than regarding it as a "failure", is one of the main keys to success in getting computers to work their magic.