CIRMMT Student Symposium 2014 - Abstracts


Oral Presentations (9:30-10:15am & 2:00-4:15pm) / Présentations orales (09:30-10:15 et 14:00-16:15)

9:30 - Andie Sigler: Harmonic and tonal analysis with pitch class sets

We can efficiently find all temporally contiguous musical segments containing n pitch classes, for n = 1 to 11; we call these segments pcns. Pcns afford computational applications in transformational pc-set analysis following, e.g., Forte and Lewin. We demonstrate applications to tonal music analysis. "Key-assertions" are pcns that are subsets of just one of the diatonic sets. Key-assertions give an overview of tonal areas in a score, including secondary and "borrowed" regions. A high-performing pitch-spelling algorithm is developed as an application. Triadic pcns show all triadic chords in a score -- these may overlap, contain extra notes, etc. To preserve pc-content information and style independence, we avoid interpretation (or pruning) at this stage. We read the triadic pcn label as a licence to see a triad, should it become useful for future analysis. We show how to navigate the graph of triads to find progressions of interest and produce a tonal interpretation. 
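The two core ideas (contiguous segments with a fixed number of pitch classes, and key-assertions as subsets of exactly one diatonic set) can be sketched in a few lines. This is an illustrative brute-force sketch, not Sigler's efficient algorithm, and the function names are invented for the example:

```python
def contiguous_pc_segments(pcs, n):
    """Return (start, end, pcset) for every contiguous span of a
    pitch-class sequence whose distinct pitch-class content has exactly
    n classes. Brute force for illustration only; the talk describes an
    efficient method."""
    spans = []
    for i in range(len(pcs)):
        seen = set()
        for j in range(i, len(pcs)):
            seen.add(pcs[j] % 12)
            if len(seen) == n:
                spans.append((i, j, frozenset(seen)))
            elif len(seen) > n:
                break  # adding more notes can only grow the pc content
    return spans

# The 12 diatonic (major-scale) pitch-class sets, one per transposition.
DIATONIC = [frozenset((d + t) % 12 for d in (0, 2, 4, 5, 7, 9, 11))
            for t in range(12)]

def is_key_assertion(pcset):
    """A pcn is a key-assertion if it is a subset of exactly one diatonic set."""
    return sum(pcset <= d for d in DIATONIC) == 1
```

For example, the pc content {0, 2, 4, 5, 7} (C D E F G) fits both the C-major and F-major diatonic sets, so it is not a key-assertion; adding pc 11 (B) narrows it to C major alone.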

 

9:55 - Jason Noble & Chelsea Douglas: Soundmass and auditory streaming: A perceptual study of Ligeti's Continuum

The term “soundmass” encompasses music of radically different characteristics — rhythmically static or kinetic, timbrally homogeneous or heterogeneous, pitched or unpitched. A complex stream may be perceived as a mass due to psychoacoustic phenomena such as masking, fusion, and spectral density.

An empirical study aimed to reveal whether soundmass perception is consistent across listeners and which musical parameters facilitate its perception. We expected mutually interactive parameters, such as timbre, rhythmic density, and periodicity length, to influence the perception of soundmass. Ligeti’s Continuum provides excellent material for studying this phenomenon.

The stimuli consisted of MIDI versions of Continuum for harpsichord, piano, and organ. Participants rated the degree to which they perceived soundmass on a slider interface. Soundmass was defined as an auditory stream that is audibly complex; simultaneous parts may be perceived, but do not further segregate into individual streams. 

Soundmass can be perceived consistently across listeners. Attack envelope and spectral composition of certain note patterns cause rating differences between instruments. The harpsichord, which has the sharpest attack, had consistently lower perceived soundmass ratings than the other instruments, which had smoother attacks. When numerous high partials were prominent in the organ, but not the piano, soundmass ratings were significantly lower for the organ. 

 

2:00 - Robert Landon Morrison: Acousmographe VE Analysis: A comparative study of graphic representation tools based on an aural analysis of Philippe Leroux's M.É.

Acousmatic music poses a perplexing problem for the analyst – not only does it lack a notated score, it also utilizes a seemingly infinite sound palette made possible through the use of modern technology. As a result, the analyst must employ new tools capable of tackling musical issues that resist traditional theoretical approaches. Toward this end, my presentation will assess recent developments in the field of electroacoustic music analysis by examining two software packages – Acousmographe and EAnalysis – both of which aim to offer powerful multimedia toolkits, such as visual sonograms and the ability to create graphic musical representations. In order to facilitate a side-by-side comparison, I have conducted an aural analysis of Philippe Leroux’s acousmatic work, M.É., using both software applications. Based on the results of this case study, I intend to address the analytical implications of each program by bringing the two platforms into dialogue with one another and offering an honest appraisal of their various strengths and weaknesses.  

 

2:25 - David Sears: Statistical learning and the perception of closure: How IDyOM models cadences

In recent years researchers have suggested that the perception of a temporal boundary results from the cessation of listener expectations following the end of a perceptual group. This claim has led to a number of computational models that quantify listener expectations by employing mathematical principles derived from statistical learning, probability estimation, and information theory. To this point, however, expectancy-based models that examine the variety of cadences and other closing formulae that appear in Western tonal music are few and relatively recent. 

We use the Information Dynamics of Music model (IDyOM), which predicts the next event in a musical stimulus by acquiring knowledge through unsupervised statistical learning of sequential and simultaneous structure (Pearce, 2010). With this model we predict the events at cadential arrival for cadences drawn from a corpus of 50 sonata-form expositions from Haydn’s string quartets, in order to examine how the formation and fulfillment of expectations may contribute to the perception of cadential closure. 
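The underlying quantity in such expectancy models is information content, IC = -log2 P(event | context): strongly expected events (e.g. the tonic at a cadential arrival) carry low IC. As a minimal sketch, here is a first-order (bigram) version with add-alpha smoothing; IDyOM itself uses more sophisticated variable-order models, and all names here are invented for the example:

```python
import math
from collections import defaultdict

def train_bigrams(sequences):
    """Count first-order continuations over a corpus of event sequences,
    a toy stand-in for IDyOM's unsupervised statistical learning."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def information_content(counts, context, event, alpha=1.0, vocab=12):
    """IC = -log2 P(event | context), add-alpha smoothed over a
    vocab-sized alphabet; low IC means a strongly expected event."""
    c = counts[context]
    p = (c[event] + alpha) / (sum(c.values()) + alpha * vocab)
    return -math.log2(p)
```

Trained on a corpus where pitch class 7 (the dominant) usually resolves to 0 (the tonic), the model assigns lower IC to the resolution 7→0 than to rarer continuations, mirroring the intuition that cadential closure coincides with highly expected events.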

 

2:50 - Cédric Camier: Refined sound field rendering tool dedicated to computer-assisted composition

This project aims to produce a physically and perceptually based sound-field rendering tool adapted to electroacoustic and mixed composition. Following N. Peters's insights [1] into the spatial aspects of compositional practice, the software will be integrated as a plug-in, controlled by an external interface, and run in pseudo-real-time. The originality of this work lies in implementing research results on acoustic imagery and sound-field restitution within a composer-dedicated tool. The auditory scene specified by the user will be independent of the loudspeaker geometry. Acoustical maps of the sound field actually rendered will be added to the visualization tools to guide the user's spatialization strategy relative to the desired spatial characteristics. This paper presents the scientific and algorithmic background and the state of progress of this long-term project.

 

3:30 - Robert Giglio: Stoss vs. Prell: Natural selection in the workshops of late eighteenth-century Vienna

Among the many fortepiano workshops of late eighteenth-century Vienna, two different types of mechanism coexisted: the Stossmechanik and the Prellmechanik. By century's end, the Prellmechanik became the preferred choice for Viennese grand pianos and was later recognized as the “Viennese action.” This project answers a simple question: Why? My hypothesis is that the Prellmechanik allows for a greater possible range of dynamic input, which leads to a more varied set of hammer-shank deformations; in turn, these deformations allow for a more variable hammer-head trajectory and a more variable area of contact with the strings, which makes the Prellmechanik more capable than its counterpart of producing varied “attacks” and therefore varied articulations.

The coexistence of stoss and prell in Vienna has been a subject of increased interest for organologists – especially since Michael Latcham (1997) established that Mozart’s piano originally contained a Stossmechanik and that this mechanism was eventually replaced by a Prellmechanik. Since then, Alfons Huber (2002) traced the development of each mechanism and provided an account of their existence in Vienna; Tom Beghin (2005) made a groundbreaking recording on a newly built Anton Walter Mozart-piano by Chris Maene, capable of housing both actions; Beghin (2008) went on to examine the differences between these mechanisms in terms of touch, sonic attributes, and musicianship. To date, however, no scientific differentiation – of the kind pursued for two different prell-actions by Stephen Birkett (2010) – has been completed. Using the unique Mozart-piano replica by Chris Maene as well as a recently constructed model of its two mechanisms, my project aims to scientifically differentiate stoss from prell. I will present the results of high-speed video capture and waveform/spectral sound analysis and will provide possible reasoning for the selection of prell over stoss in Vienna.


3:55 - Robert-Eric Gaskell: Subjective and objective evaluation of distortion in analog electronics: Capacitors and operational amplifiers

The electronic signal processors used in record production are, themselves, musical instruments that affect the character of recorded sound. The circuits in this equipment are complex combinations of electronic components (resistors, capacitors, diodes, etc.), each component having the potential to influence overall sonic quality. For the purposes of informing the design and physical modeling of audio electronics, the objectively measurable characteristics of two common electronic components, operational amplifiers and capacitors, were correlated with the results of subjective listening tests. The nonlinear characteristics of both component types were shown to have the potential to significantly affect listener perception of sound quality. Models of the nonlinearities in the capacitors and operational amplifiers were developed and used in further listening tests to determine the nature and range of their subjective effects. 
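Component nonlinearities of this kind are often modeled, to a first approximation, as memoryless polynomial waveshapers, whose even-order terms produce even harmonics and odd-order terms odd harmonics. The sketch below is a generic toy model with invented coefficients, not the authors' measured component models, paired with a simple single-bin DFT estimate of total harmonic distortion:

```python
import math

def waveshape(x, a2=0.01, a3=0.005):
    """Memoryless polynomial nonlinearity: the x**2 term yields even
    harmonics, the x**3 term odd harmonics (toy coefficients)."""
    return x + a2 * x ** 2 + a3 * x ** 3

def thd(f0=1000.0, fs=48000.0, n=4800, harmonics=5):
    """Estimate total harmonic distortion of the waveshaper driven by a
    unit-amplitude sine, via single-bin DFT magnitudes at each harmonic."""
    xs = [waveshape(math.sin(2 * math.pi * f0 * k / fs)) for k in range(n)]

    def mag(f):
        re = sum(x * math.cos(2 * math.pi * f * k / fs) for k, x in enumerate(xs))
        im = sum(x * math.sin(2 * math.pi * f * k / fs) for k, x in enumerate(xs))
        return math.hypot(re, im)

    fund = mag(f0)
    harm = math.sqrt(sum(mag(f0 * h) ** 2 for h in range(2, harmonics + 1)))
    return harm / fund
```

With the default coefficients the model yields roughly 0.5% THD on a 1 kHz sine, a level in the range where such distortion can begin to matter perceptually.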

Demos & Posters (11:20am-1:00pm) / Démonstrations et Posters (11:20-13:00)

DEMONSTRATIONS & POSTERS:

1. Marcello Giordano, Martha Shiell and Pauline Tranchant: Beat synchronization in deaf people using vibrotactile stimulation

Anecdotal reports suggest that congenitally deaf people enjoy dancing to music, which must be experienced through non-auditory senses, such as vibrations felt through the floor. Although some research has investigated how vibrotactile stimulation can convey musical content [Marshall et al. 2011, Giordano et al. 2011], little is known about the ability to use this information for beat synchronization. We built a vibrotactile display: loudspeaker-like vibrating actuators [Yao et al. 2010] embedded into a wooden platform. The platform provides beat information about a corresponding musical audio track, via vibrations produced by the actuators, which are driven by a synthesized "tactile translation" [Birnbaum et al. 2007] of the audio track. Frequency response compensation techniques [Marshall 2008] ensure that the vibrations of the plank are consistent with the vibration normally experienced from a wooden floor. In a pilot task, hearing participants synchronized a continuous full body bouncing motion and discrete finger tapping to the perceived beat information. These movements were measured with a motion capture system and analysed for synchronization accuracy. Further experiments will compare this accuracy between deaf and hearing people. 
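A standard way to quantify synchronization accuracy in such tapping and bouncing tasks is to match each movement event to its nearest beat and summarize the signed asynchronies (negative mean indicates anticipation, which is typical in sensorimotor synchronization). This is a generic sketch of that analysis, not the authors' pipeline; the function name is invented:

```python
def synchronization_stats(tap_times, beat_times):
    """Match each tap (s) to its nearest beat (s) and return the mean
    and standard deviation of the signed asynchronies."""
    asyncs = [min((t - b for b in beat_times), key=abs) for t in tap_times]
    n = len(asyncs)
    mean = sum(asyncs) / n
    sd = (sum((a - mean) ** 2 for a in asyncs) / n) ** 0.5
    return mean, sd
```

Comparing these statistics between conditions (or, here, between deaf and hearing participants) then reduces to comparing distributions of mean asynchrony and its variability.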

 

2. Ian Hattwick and Preston Beebe: Unsounding Objects: Audio feature extraction for control of sound synthesis

The goal of Unsounding Objects is to create digital musical instruments which use audio feature extraction for the control of sound synthesis. The first instrument we developed, the Spectrasurface, consists of four surfaces equipped with contact mics. When objects placed on these surfaces are manipulated, audio signals are created, on which audio feature extraction is performed. For our current research we refined the interface and software for the Spectrasurface and created a second composition. In addition, we expanded the analysis, mapping, and synthesis techniques developed for the Spectrasurface to wind and string instruments.  

 

3. Francesco Tordini: Measuring and perceiving loudness. Lessons learned from streaming natural sounds

Loudness is a well-established term in the audio domain, but its perceptual definition and measurement techniques are still evolving. This work compares the current best practices for loudness measurement used in the broadcasting industry with some of the models coming from psychoacoustics and psychophysics research.

We use the same set of natural sounds to evaluate the results of these techniques with respect to the data gathered by two perceptual rating tasks. 

 

POSTERS:

4. Benjamin Bacon and Chris Smith: Percussive gesture in intermedia performance

Benjamin Bacon and Christian Smith's 2013-2014 CIRMMT student award project was to investigate the practical and creative possibilities of gesture in intermedia percussion performance. Most intermedia works (multimedia works) require the activation of electronics through the use of a button or trigger device. This project was focused on giving a percussionist the possibility to use a physical gesture to control the activation of a scene, cue, or other intermedia event. Given the physical demands of contemporary percussion music, the researchers wanted to shed light on the possible advantages and/or disadvantages of the gestural control of intermedia events in percussion performance. 

 

5. Anthony Bolduc: Sound field reproduction of vibroacoustic models: Application to a plate with wave field synthesis

In an engineering context, objective evaluation of vibroacoustic models is traditionally performed with visual or numeric information. However, actual auditory perception cannot be transmitted through these types of objective representation. Sound Field Reproduction (SFR) of the sound fields emitted by physical objects is often based on simplistic point-source models with modified radiation properties, or on recordings using stereophonic or binaural techniques. For better perceptual evaluation of engineered products, it would be useful if such methods were not limited to certain types of sources, modeling techniques, or predefined listening spots. A general SFR method is proposed, applying the Wave Field Synthesis formalism to common vibroacoustic models as found in mechanical engineering. SFR applied to an analytical model of a harmonically or broadband-excited plate is studied using three Secondary Source Distribution geometries. Results of numerical simulations illustrate the viability and limits of the approach.

Alain Berry at the Université de Sherbrooke is my supervisor; I am in the second year of an M.Sc.A. degree. 

 

6. Olivier Gagnon: Investigation of the influence of harmony on the perception of emotion

This study aims to investigate the influence of various harmonizations on the perception of emotions in music. We hypothesize that (1) a more dissonant harmonization and (2) a harmonic system based on intervals smaller than a major third will tend to lead to the perception of more negatively valenced emotions. To verify these hypotheses, we have composed short musical pieces that evoke five basic emotions -- happiness, tenderness, sadness, fear, and anger -- based on musical parameters surveyed in Juslin (2001) and shared acoustic cues for emotions in speech and music (Juslin & Laukka, 2003). Each piece is then realized in three different harmonizations based on the superimposition of intervals of the same kind: (1) thirds/sixths (tonal/modal harmonies), (2) fourths/fifths, and (3) seconds/sevenths. Finally, perceptual tests will soon be conducted to conclude this study. 

References 

Juslin, P. N. (2001). Communicating emotion in music performance: A review and theoretical framework. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research. Oxford: Oxford University Press. 

Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770-814. 
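The three harmonic systems described above can be illustrated by building chords through the superimposition of a single interval class above a root (in semitones, with tertian harmony alternating major and minor thirds). This is only a schematic sketch of the chord construction, not the study's compositional method:

```python
from itertools import cycle, islice

def stack(root, intervals, n_notes):
    """Build a chord by repeatedly superimposing the given intervals
    (in semitones) above a root, given as a MIDI note number."""
    notes = [root]
    for step in islice(cycle(intervals), n_notes - 1):
        notes.append(notes[-1] + step)
    return notes

tertian  = stack(60, [4, 3], 4)  # C E G B: thirds (tonal/modal harmony)
quartal  = stack(60, [5], 4)     # C F Bb Eb: stacked fourths
secundal = stack(60, [2], 4)     # C D E F#: stacked seconds (cluster-like)
```

Here the same root yields increasingly dissonant sonorities as the stacked interval shrinks, which is the manipulation the hypotheses relate to emotional valence.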

 

7. Telina Ramanana: Sound reproduction by beamforming capture and wave field synthesis

In this paper, the reproduction of industrial or vehicle sound environments is investigated. The method relies on the combination of microphone array recording, beamforming, Wave Field Synthesis (WFS), and bass equalization. The approach is based on fixed looking-direction beamformers to separate signals depending on their incoming direction. The beamformer signals are then processed by WFS in order to recreate the sound environment as multiple plane waves. A theoretical parametric study based on the number of beamformer steering directions is provided. Comparisons of the target and reproduced sound environments are reported in order to find appropriate system parameters for the reproduction of sound environments as found in vehicle or industrial contexts. 
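At its simplest, rendering a plane wave over a loudspeaker array amounts to delaying each speaker by the projection of its position onto the wave's propagation direction, divided by the speed of sound. The sketch below shows only this delay geometry, under simplifying assumptions (2D positions, propagation-direction azimuth, no WFS amplitude weighting or filtering); it is not the system described in the abstract:

```python
import math

def plane_wave_delays(speaker_positions, azimuth_deg, c=343.0):
    """Per-speaker delays (s) so that a loudspeaker array radiates a
    plane wave propagating toward the given azimuth. Positions are
    (x, y) in metres; delays are shifted so the earliest is zero."""
    az = math.radians(azimuth_deg)
    k = (math.cos(az), math.sin(az))  # unit propagation direction
    raw = [k[0] * x + k[1] * y for (x, y) in speaker_positions]
    earliest = min(raw)
    return [(r - earliest) / c for r in raw]
```

For a wave propagating along +x, a speaker 1 m further along x fires about 2.9 ms later; for broadside incidence the delays are equal.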

 

8. Daniel Steele: Spreading soundscapes: a practical study on closing the gaps between soundscape researchers and urban designers

Evaluations of soundscape are significantly modified by context, such as the person making the evaluation, their mood when they make it, or the activity they are engaged in. This has important implications for the planning and design of urban spaces, but urban planners and designers can be limited by existing resources that take a traditional, negative noise-reduction approach rather than a positive soundscape approach. Semi-structured interviews with urban planners are conducted to understand their conceptualizations of urban sound and how those concepts operate in the context of their work. We aim to reach urban planners in a more accessible discourse about soundscape knowledge, such as the role of activity, to achieve better outcomes for soundscape design. 

Keynote Address (10:20-11:20am) / Conférence invitée (10:20-11:20)

10:20 - Daniel Trueman: Scordatura: on Re-mapping the Body to Sound (and vice-versa)

Scordatura, or mis-tuning, has a long history in string music, from Biber to Bach, to Stravinsky, Ligeti and others and also in the many fiddle “cross-tunings” of the world. In this talk, I will explore how the notion of scordatura is a powerful tool for generating creative spaces, not only for stringed instruments, but for musical instruments old and new. The assumptions that the very name “scordatura” implies (in particular, the hard-earned embodied knowledge that is dependent on an accepted standard) raise challenging questions for new digital instrument design, and suggest that digital instrument building itself might be a kind of compositional and performance practice.