CIRMMT Student Symposium 2013 - Abstracts

To view the list of presenters: CIRMMT Student Symposium - List of Presenters

General Assembly and Student Symposium 2013 overview

Oral Presentations (9:00-9:45am & 1:40-2:50pm) / Présentations orales (9:00-9:45 et 13:40-14:50)

9:00 - Sven-Amin Lembke & Scott Levine: Timbre blending as a performance variable: investigation of the interactive relationship between performers and a sound recording engineer 

Instrument acoustics and their perceptual correlates have been the major focus of past research on timbre blending, leaving the influence of musical performance on the actual realization of blend largely unexplored. Our investigation considers the interactive relationship between two performers attempting to achieve blend, and the intermediate function a sound engineer fulfills in conveying blend to an audience. We report findings from two behavioral experiments that assess what timbral adjustments horn and bassoon players employ in achieving blend, and how sound engineers subsequently adjust relative levels between main and spot microphones to translate blend into a stereo mix. The performances are furthermore investigated as a function of room acoustics, communication directivity, musical voicing, and leading vs. accompanying performer roles.

 

9:25 - Bryan Allen Martin: The Sounds That Made Rock: A Comparison of Classic Fender and Marshall Amplifier Designs

The presentation will discuss the differences in the circuit architectures of the 1961 Fender Vibrolux 6G11 and the 1969 Park 75 (Marshall 50 Watt) guitar amplifiers, and how those differences contribute to the distinct sound of each. Topics will include the gain-stage architecture and its relation to the tone controls and gain topologies within each unit; the characteristics of overload and clipping within each circuit; each amplifier's transition from 'clean' to 'dirty' tone; the harmonics of overload; the frequency response and curves of the distinctive tone controls; the measurement of bandwidth in nonlinear systems; and the power-supply characteristics that contribute to the sound of each device.
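As a minimal illustration of the "harmonics of overload" discussed in the talk (this sketch is not from the presentation; the tanh saturation curve and all parameters are assumptions), the following Python fragment soft-clips a sine tone at increasing drive levels and reports the resulting harmonic levels. A symmetric curve like tanh produces only odd harmonics; asymmetric clipping, as in a single-ended triode stage, adds even harmonics as well.

```python
import numpy as np

fs = 48000          # sample rate (Hz)
f0 = 220.0          # test tone (Hz); chosen so it lands exactly on an FFT bin
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t)

def soft_clip(signal, drive):
    # A memoryless "tube-like" saturator. Real amplifier stages add bias
    # shift and supply sag, but this reproduces the basic overload series.
    return np.tanh(drive * signal)

for drive in (1.0, 5.0, 20.0):  # from near-clean to heavily overdriven
    y = soft_clip(x, drive)
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    # Levels of the first five harmonics relative to the fundamental.
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, 6)]
    levels = 20 * np.log10(spectrum[idx] / spectrum[idx[0]])
    print(f"drive={drive:5.1f}  harmonics (dB re f0):", np.round(levels, 1))
```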

 

1:40 - Meghan Goodchild: Emotional responses to orchestral gestures

Recent empirical studies indicate that timbral changes induce strong emotional responses in listeners, but these large-scale effects have not been examined in music-theoretical research. I propose a 2x2 model of orchestral gestures that categorizes them according to whether the changes are gradual or sudden, and whether they are additive or reductive. An exploratory behavioural study tested the perceptual relevance of orchestral gestures to listeners' continuous ratings of emotional intensity. I present a new type of visualization to isolate response patterns and investigate the connection to musical and acoustical parameters.

To further understand the role of timbral changes within the context of these excerpts, re-orchestrations were created (with the assistance of composers John Rea, Félix Frédéric Baril and Denys Bouliane) to test specific hypotheses. I will outline the results of an experiment comparing psychophysiological (biosensor) and behavioural (arousal and valence) measures.

 

2:05 - Terri Hron: Composing Sharp Splinter: towards a personal method of documenting collaborative practice

One of my main goals for the Sharp Splinter project was that, in bringing it to life and recording many of the steps, I might arrive at a useful method of analyzing some of the ways collaboration influences the directions taken and choices made. The two-week workshop with three pianists and a small ensemble that led to the proto-version and performance of Malý velký svět was the first collaboration for which I gathered written, audio and video documentation of the process. In a second step, I sort and analyze these traces to see whether they contain any valuable data, and to develop guidelines for myself, not only for future documentation of the collaborative process, but also to preserve some trace of the collaboration and of its impact and meaning for the piece in later performances.

 

2:30 - Andie Sigler: Computational Analysis of Musical Structure: Polyphony and Texture

Species counterpoint, dating from the sixteenth century, systematically explores the three basic possibilities for the rhythmic relationship between two voices: coincidence (first species), inclusion (second and third species) and overlap (fourth species). In this talk I introduce a recursive structure, the "polyphony tree", to describe and explore these polyphonic relations among any number of voices.

Polyphony trees offer a systematic method for analysis of musical texture both between different corpora and within individual pieces. They are suited for use with other techniques, such as harmonic analysis. Other applications include separation of textural layers and extraction of inner rhythms, "vertical" quantization of data, and generative modelling of polyphony.    
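As a hypothetical sketch of the idea (the representation and names below are illustrative assumptions, not Sigler's implementation), each note can be modelled as an onset-offset interval and pairs of sounding notes classified into the three species relations named above:

```python
from typing import List, Tuple

Note = Tuple[float, float]  # (onset, offset) in beats

def relation(a: Note, b: Note) -> str:
    """Classify the rhythmic relation between two sounding notes using
    the species-counterpoint categories from the abstract."""
    if a == b:
        return "coincidence"  # first species: identical time spans
    if (a[0] <= b[0] and b[1] <= a[1]) or (b[0] <= a[0] and a[1] <= b[1]):
        return "inclusion"    # second/third species: one span inside the other
    if a[0] < b[1] and b[0] < a[1]:
        return "overlap"      # fourth species: suspension-like overlap
    return "disjoint"         # the notes never sound together

# Two-voice example: a whole note against two half notes (second species).
cantus: List[Note] = [(0.0, 4.0)]
counterpoint: List[Note] = [(0.0, 2.0), (2.0, 4.0)]
for n in counterpoint:
    print(n, "->", relation(cantus[0], n))  # both: inclusion
```

A polyphony tree would then recursively group such pairwise relations across any number of voices; the recursion itself is beyond this sketch.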

 

Demos & Posters (10:45am-12:00pm) / Démonstrations et Posters (10:45-12:00)

DEMONSTRATIONS:

1. Preston Beebe, Zachary Hale & Ian Hattwick: Unsounding Objects: Control of sound synthesis mapping, extended musical performance, composition

Unsounding Objects is a series of studies composed for the SpectraSurface that examines various characteristics of the instrument. The SpectraSurface is a set of playing surfaces contained within a suitcase and equipped with contact microphones. Found objects such as bowls, pipes, or toys are placed on top of the surfaces. The sounds from the contact mics are sent to a computer where they are analyzed for their important audio features; these features are then used to drive sound synthesis. The tradition of found objects in the percussion idiom (Henry Cowell, John Cage, Lou Harrison) offers a familiar interface with unique timbral and temporal characteristics which produce interesting results in the analysis-synthesis platform of Unsounding Objects.
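A minimal sketch of the analysis stage might look as follows (the feature set and the mapping to synthesis parameters are assumptions for illustration; the abstract does not specify which features Unsounding Objects extracts):

```python
import numpy as np

fs = 44100  # assumed sample rate of the contact-mic input

def analyze_frame(frame: np.ndarray) -> dict:
    """Extract two common audio features from one buffer of contact-mic audio."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    rms = float(np.sqrt(np.mean(frame ** 2)))  # loudness proxy
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))  # brightness proxy
    return {"rms": rms, "centroid": centroid}

def features_to_synth(features: dict) -> dict:
    # Hypothetical mapping: louder strikes -> louder synthesis,
    # brighter objects -> higher filter cutoff.
    return {
        "amplitude": min(1.0, features["rms"] * 10.0),
        "filter_cutoff_hz": 200.0 + 8.0 * features["centroid"],
    }

# Simulate one 1024-sample frame of a struck object (decaying noise burst).
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024) * np.exp(-np.linspace(0.0, 8.0, 1024))
print(features_to_synth(analyze_frame(frame)))
```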

 

2. Marcello Giordano & Marlon Schumacher: A vibrotactile synthesis framework for haptic feedback in live-electronic music performance

In this demo we are going to show our current developments and approaches for vibrotactile feedback in live-electronic music performance.

More precisely, we are going to show a prototype implementation of a haptic synthesizer, which allows the display of any control variable within the live-electronics system as a haptic stimulus. Conceptually, we represent the link between CLEF control variables and synthesis parameters for the haptic display as a mapping within libMapper. So far this has proven to be a flexible approach, since the mapping (haptic synthesis) can be changed over time, depending on the musical context and the performer’s preferences.
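The mapping idea can be sketched in a few lines of toy Python (illustrative only; the actual demo uses libMapper, whose API differs from this code, and the "density" variable below is a made-up example rather than a real CLEF parameter):

```python
class Mapping:
    """Connect a live-electronics control variable to a vibrotactile
    synthesis parameter via a runtime-editable range mapping."""

    def __init__(self, src_range, dst_range):
        self.src_range, self.dst_range = src_range, dst_range

    def __call__(self, value: float) -> float:
        lo, hi = self.src_range
        out_lo, out_hi = self.dst_range
        x = min(max((value - lo) / (hi - lo), 0.0), 1.0)  # normalize and clamp
        return out_lo + x * (out_hi - out_lo)

# A granular-density control (0-100 grains/s) driving motor vibration
# amplitude expressed as an 8-bit PWM duty cycle (0-255).
density_to_vibration = Mapping(src_range=(0.0, 100.0), dst_range=(0.0, 255.0))
print(density_to_vibration(42.0))  # -> 107.1

# As in the authors' libMapper setup, the mapping can be re-assigned
# mid-performance to suit the musical context.
density_to_vibration.dst_range = (64.0, 255.0)
```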

Current research directions include the appropriateness of different devices for the haptic display to convey different types of information (e.g. triggers, continuous controls, etc.). The displays we have developed so far consist of a commercially available Android® mobile phone, on which a custom-built application allows control of the vibration motor through CLEF, and an Arduino micro-controller with two (or more) vibrating disks which, by leveraging tactile illusions, allow the representation of more complex information.

In the demo we are going to present a number of applications of our framework via the performance of musical examples by CIRMMT student Preston Beebe.

 

3. Ida Toninato: Mixed music: analysis and interpretation

The object of this presentation is to give a general overview of some of the issues facing an interpreter of mixed music in the fields of analysis and performance. We consider mixed music from three major angles: gesture, sound description, and time. We explain how we use these angles to build an interpretation in which the discrepancies between virtuality and reality can be unified in a poetic approach to this repertoire. We also present some of the analysis tools that we have developed based on the works of Roy and Smalley.

 

POSTERS:

4. Gregory Burlet: Robotaba guitar tablature transcription framework

Robotaba, a web-based guitar tablature transcription framework, is presented. The framework facilitates the creation of web applications in which polyphonic transcription and guitar tablature arrangement algorithms can be embedded. Such a web application is implemented, resulting in a unified system capable of transcribing guitar tablature from a digital audio recording and displaying the resulting tablature in the web browser. Furthermore, two ground-truth datasets are compiled from manual transcriptions gathered from a tablature website to evaluate polyphonic transcription and guitar tablature arrangement algorithms. Using these datasets, the performance of the polyphonic transcription and tablature arrangement algorithms embedded in the web application is evaluated using several metrics.

 

5. Ajith Damodaran: Application of composite material to the Chenda, an Indian percussion instrument

A composite drum shell using carbon fibre, suitable for replacing the hardwoods used in traditional drums, was developed. An Indian drum called the Chenda was investigated and its acoustic characteristics are reported. The mechanical properties of jackfruit wood were taken as the benchmark for designing the sandwich structure. A prototype was fabricated using hand layup and vacuum-bagging techniques. An improved tuning system using turnbuckles and Spectra fibre ropes was implemented. Acoustic tests were performed, and the frequency spectra of the wooden and composite drums are compared. Experimental modal analysis was performed to identify the mode shapes, resonant frequencies and corresponding damping ratios of the newly designed drum. The results show that composite materials have the potential to mimic the properties of the woods used in percussion instruments.

 

6. Dalia El-Shimy: Reactive environment for network music performance

For a number of years, musicians in different locations have been able to perform with one another over a network as though present on the same stage. However, rather than attempt to re-create an environment for Network Music Performance (NMP) that mimics co-present performance as closely as possible, we propose focusing on providing musicians with novel controls that can help increase the level of interaction between them. To this end, we have developed a reactive environment for distributed performance that provides participants with dynamic, real-time control over several aspects of their performance, enabling them to change volume levels and experience exaggerated stereo panning. In addition, our reactive environment reinforces a feeling of a “shared space” between musicians. It differs most notably from standard ventures into the design of novel musical interfaces and installations in its reliance on user-centric methodologies borrowed from the field of Human-Computer Interaction (HCI). Not only does this research enable us to closely examine the communicative aspects of performance, but it also allows us to explore new interpretations of the network as a performance space. 

 

7. Mailis Gomes Rodrigues: Intonaspacio: A site-specific digital musical instrument

Site-specific art treats space as an important feature in the creation of the work of art. The performance space frames the work, and is related to it not only through the work's subject but also through its physical characteristics. Our research aims to design a digital musical instrument (DMI) that gives the performer access to the acoustic characteristics of the room and allows them to work with it creatively, creating site-specific sound works and adding space as another compositional parameter.

As an exchange student researcher at McGill, we are designing Intonaspacio, a musical interface that records the sound ambience of the room and reproduces it in a continuous loop in order to excite the room's resonant frequencies. These are then analyzed and combined with other DSP parameters controlled by a set of sensors embedded in the interface structure. In this way the performer can not only record and reproduce the "sound of the room" but also modulate it with their gestures.

At present we are testing various mappings to determine which parameters are best suited to control by the gestures Intonaspacio affords. We intend to work collaboratively with performers, musicians and composers in order to gather their feedback on the sound generated by Intonaspacio and on the mapping approach.

 

8. Jason Hockman & Jeremy VanSlyke: Score follower as a production tool for classical and contemporary music recording

Recording producers depend heavily on musical scores while recording classical music to evaluate artists' musical performances. Their additional role of keeping track of sequences of recorded performances during recording sessions depends on detailed note-taking in separate "take sheets", which can prevent producers from giving their full attention to artists' performances. In this project, we have explored the possibility of facilitating the management of audio recordings generated during recording sessions by automating the creation of take sheets. The alignment of audio recordings and musical scores can be achieved using sonified MIDI scores and a similarity matrix. This project is a step towards the development of music production tools that further integrate production technologies with musical scores.
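One common way to compute such an alignment (sketched below under assumptions: the abstract specifies sonified MIDI and a similarity matrix, but not the features or the warping algorithm, and the file names are hypothetical) is to extract chroma features from both recordings and align them with dynamic time warping:

```python
import numpy as np
import librosa

hop = 2048

# "Score" audio is the sonified MIDI rendering; "take" is a session recording.
score_audio, sr = librosa.load("sonified_score.wav", sr=22050)
take_audio, _ = librosa.load("take_07.wav", sr=22050)

# Chroma features are a standard choice for audio-to-score similarity.
chroma_score = librosa.feature.chroma_cqt(y=score_audio, sr=sr, hop_length=hop)
chroma_take = librosa.feature.chroma_cqt(y=take_audio, sr=sr, hop_length=hop)

# Dynamic time warping over the cosine-distance similarity matrix yields an
# alignment path pairing each take frame with a score position.
D, wp = librosa.sequence.dtw(X=chroma_score, Y=chroma_take, metric="cosine")
wp = np.flip(wp, axis=0)  # the path is returned end-to-start

# Convert a few path points to (score time, take time) in seconds.
for score_frame, take_frame in wp[::200]:
    print(f"score {score_frame * hop / sr:7.2f}s  <->  "
          f"take {take_frame * hop / sr:7.2f}s")
```

Given such an alignment for each take, its start and end can be logged against score positions automatically, which is exactly the information a take sheet records.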

 

9. Doyuen Ko & Jung Wook Hong: Effects of ambient light intensity on perceived brightness of sound

Inter-modal perception between the visual and auditory senses has been studied extensively in many fields. However, little research has addressed the perceptual effects of ambient lighting specifically in audio production environments. In this study, a task-based experiment was conducted to explore the relationship between ambient light intensity and perceived auditory brightness. A total of 10 experienced listeners were invited into a recording studio and exposed to 3 randomized ambient light conditions with different illuminance settings. The sound stimuli were chosen from reference music samples from the McGill Sound Recording program, with which all participants had become familiar over the years. The high-frequency energy of each example was attenuated by the same amount through a shelving filter. The subjects were asked to return the filter gain to the perceived zero point using a continuously variable encoder. The value of the gain adjustment and the completion time were recorded. The results showed a positive correlation between luminance level and gain variance. More results will be presented in the poster session.
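The stimulus manipulation can be sketched as follows (the abstract does not specify the filter design; a standard Audio-EQ-Cookbook high-shelf biquad is one plausible way to attenuate, and later restore, high-frequency energy):

```python
import numpy as np
from scipy.signal import lfilter

def high_shelf(fs, f0, gain_db, S=1.0):
    """RBJ Audio-EQ-Cookbook high-shelf biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw0, sqrtA = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) + (A - 1) * cosw0 + 2 * sqrtA * alpha),
                  -2 * A * ((A - 1) + (A + 1) * cosw0),
                  A * ((A + 1) + (A - 1) * cosw0 - 2 * sqrtA * alpha)])
    a = np.array([(A + 1) - (A - 1) * cosw0 + 2 * sqrtA * alpha,
                  2 * ((A - 1) - (A + 1) * cosw0),
                  (A + 1) - (A - 1) * cosw0 - 2 * sqrtA * alpha])
    return b / a[0], a / a[0]

# Attenuate highs above ~4 kHz by 6 dB (corner frequency and amount are
# assumptions); the subject's task is then to dial the gain back toward 0 dB.
fs = 48000
b, a = high_shelf(fs, f0=4000.0, gain_db=-6.0)
x = np.random.default_rng(0).standard_normal(fs)  # 1 s of test noise
y = lfilter(b, a, x)
```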

 

10. Hossein Mansour: Enhanced simulation of the bowed cello string

Time-domain simulation of the bowed string has been the subject of many studies in recent years, and as a result many features of bowed strings have been explained qualitatively and, to a certain extent, quantitatively. Building on earlier simulation models, several new features have been added to make the model more realistic. In particular, a large number of body modes, both transverse polarizations of the string motion, the longitudinal vibrations of the bow hair, and the effect of coupled strings are included. Different features of the model are enabled in turn, and the classic Schelleng minimum bow force is calculated for combinations of bow-bridge distance and different notes played on the string. The main finding is that all features reduce the minimum bow force to some extent. This reduction is almost frequency-independent for the second polarization and the longitudinal bow-hair vibration, but clearly frequency-dependent for the coupled-strings case.
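For reference, one common statement of Schelleng's bow-force limits is given below (the prefactors vary across derivations, so only the scaling is shown; it is the steep growth of the minimum force toward the bridge that these calculations probe):

```latex
% Schelleng's bow-force limits for sustained Helmholtz motion:
\[
  F_{\min} \;\propto\; \frac{Z_0^{2}\, v_b}{R\,\beta^{2}\,(\mu_s - \mu_d)},
  \qquad
  F_{\max} \;\propto\; \frac{Z_0\, v_b}{\beta\,(\mu_s - \mu_d)}
\]
% Z_0: characteristic impedance of the string; v_b: bow speed;
% beta: bow-bridge distance relative to string length;
% R: equivalent resistance representing losses at the bridge;
% mu_s, mu_d: static and dynamic friction coefficients.
```

Under this scaling, any added loss path (a second polarization, bow-hair vibration, coupled strings) that raises the effective resistance R lowers F_min, which is consistent with the main finding above.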

 

11. Daniel Steele: Spreading Soundscapes: a collaborative study on sensing in the city

This project addresses the gap between urban designers and soundscape researchers in order to facilitate better design for city sound. I consider positive soundscape approaches - like how to build an urban park that sounds appropriate for its intended activities - rather than negative ones, like avoiding noise through the construction of barriers. The first step in bridging the gap was establishing that better soundscape outcomes can be achieved through interdisciplinary work. Next, an approach that considers the appropriateness of soundscapes for particular activities was needed to capture the individual, context-specific needs of city users, though this approach generates many data points. Finally, it was necessary to explore representations for this vast body of data and knowledge, such as using geographic information systems (GIS) to display qualitative data about soundscapes. In the future, automatic determination of soundscape quality will pave the way for improved tools that lower the barriers to considering sound in city design.

 

12. Francesco Tordini: Saliency of everyday sounds. Learning descriptors. Setting priorities

Auditory saliency is a precursor to bottom-up attention modelling, but its definition is still far from perfect. This is mainly due to the lack of robust methods for gathering basic data, and to oversimplifications such as the assumption of monaural signals.

We introduce a newly designed paradigm that tries to capture the perceived auditory saliency of simultaneous everyday sounds in a binaural, spatial scenario. Preliminary behavioral results are presented.

The analysis leading from subjective raw data to a perceptual saliency ranking of the tested sounds is accomplished using a new mixed-data classification and ranking algorithm that relies not only on response time and detection accuracy, but also on categorical data associated with the background or with the sounds.

Finally, we attempt to uncover and collect acoustical features that can explain the perceived saliency scores gathered from human subjects, with a main focus on those describing the temporal structure of sounds.

Keynote Addresses (9:45am & 3:20pm) / Conférences invitées (9:45 & 15:20)

9:45 - Perry Cook: A (not so) Brief History of Laptop Orchestras and Ensembles

This talk will romp through some of the history of humans performing with computer-mediated systems, especially in ensemble contexts. Particular attention will be paid to the genesis of the Princeton Laptop Orchestra and the many other LOrks in its family tree. Since I'm a singer, I'll talk quite a bit about the voice in this context, thinking aloud about what Choirs of the Future (and really of the immediate present) might look like.


3:20 - Christopher Dobrian: Interactivity, Shminteractivity: In Search of the Expressive Computer

Interactive computer music and interactive digital art have become major genres over the past thirty years, and human-computer interaction (HCI) has been a major field of study in computer science for about the same amount of time. Yet much interactive music and art remains simplistic and unsatisfying in its level of musical and performative sophistication. What are some of the major problems and obstacles confronting designers and composers of interactive instruments and music, and what are their possible solutions? The speaker will examine these questions and demonstrate the pros and cons of some approaches used in his own work.