
The Science and Technology of Music



Workshop on Three-dimensional Immersive Audio


This workshop is organized by Research Axis 1 (Instruments, Devices and Systems) and will be held in room A832, on the 8th floor of the New Music Building of McGill University. Registration is required.

  • Research Workshop
When: Apr 20, 2016, from 9:00 AM to 2:00 PM
Where: A832, New Music Building, 527 Sherbrooke St. West

Registration / Inscription

Space is limited; register now to ensure your seat at this special event!

  Registration link: Three-dimensional Immersive Audio


Workshop Program Details


Each session will be followed by a 10–15 minute discussion.


      Fresh coffee will be ready at 8:45 am!


  • 9:00 – 9:30: “Three-Dimensional Music Recording” - Will Howie and Bryan Martin, Ph.D. Students & Course Lecturers, Sound Recording Area, Dept. of Music Research, McGill University

This presentation will summarize the current research into 3D audio being conducted by Will Howie and Bryan Martin, Ph.D. candidates in the Graduate Program in Sound Recording, McGill University (Canada). Concepts, techniques, and aesthetics for three-dimensional music recording will be discussed. The presentation will feature playback of a number of audio examples, recorded and mixed for NHK's 22.2 multichannel sound format, spanning classical, rock/pop, and jazz genres.

  • 9:45 – 10:15: "Surrounding microphone arrays for sound radiation measurements" - Prof. Dr. Michael Vorländer, Director, Institute of Technical Acoustics, RWTH Aachen University, Germany

The directivities of musical instruments are often neglected in auralization, whether working with measured or simulated acoustical environments. This is due to the complex nature of both the radiation of musical instruments and the post-processing involved. Such measurements require a large surrounding spherical microphone array that fully encompasses the musician in an anechoic chamber. In practice, however, precise alignment of the source with the physical center of the array is impossible. With the help of re-alignment algorithms, the sound source can be virtually shifted to the center of the array, allowing a more accurate description in the spherical harmonic domain. This workshop presents different aspects of measurement and post-processing strategies, with the goal of providing directivity patterns of musical instruments for application in both measurement and simulation.
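To give a flavor of the spherical-harmonic description mentioned above, here is a minimal sketch of fitting low-order spherical harmonic coefficients to pressures sampled on a surrounding array. The microphone positions and the "measured" radiation pattern are invented placeholders, and this is not the presenters' re-alignment algorithm, just the standard least-squares decomposition step.

```python
import numpy as np

# First-order real spherical harmonics (ACN ordering, N3D-style scaling):
# a minimal basis for a low-resolution directivity fit.
def sh_basis(az, el):
    """Evaluate Y_00, Y_1-1, Y_10, Y_11 at azimuth/elevation (radians)."""
    return np.stack([
        np.ones_like(az),                      # W (omnidirectional)
        np.sqrt(3) * np.cos(el) * np.sin(az),  # Y
        np.sqrt(3) * np.sin(el),               # Z
        np.sqrt(3) * np.cos(el) * np.cos(az),  # X
    ], axis=-1)

# Hypothetical mic directions on a surrounding sphere (azimuth, elevation).
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 32)
el = np.arcsin(rng.uniform(-1, 1, 32))  # uniform sampling over the sphere

# Stand-in "measurement": a source radiating mostly toward +x, plus noise.
pressure = 1.0 + 0.8 * np.cos(el) * np.cos(az) + 0.01 * rng.standard_normal(32)

# Least-squares fit of the SH coefficients to the sampled magnitudes.
Y = sh_basis(az, el)
coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
print(coeffs.round(3))  # the W and X terms dominate, as constructed
```

With enough microphones and a well-centered source, the same fit extends to higher orders; source misalignment shows up as energy leaking into higher-order terms, which is what the re-alignment step corrects.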

  • 10:30 – 11:00: “A 3D spatial approach to music composition and sonic interaction, using the Spatial Audio Toolkit for Immersive Environments (SATIE), developed at the Society for Arts and Technology (SAT)” - Zack Settel, Resident Composer at the Society for Arts and Technology (SAT), Montreal, Québec

Over the past 10 years, composer and performer Zack Settel has been using 3D audio scenes for composing and performing music. SATIE, the latest environment for authoring and rendering audio scenes, builds on previous work such as Soundscape, developed in a CCA/NSERC collaboration with McGill's Centre for Intelligent Machines, followed by AudioTwist and SpatOSC, both developed at the SAT. Unlike those earlier environments, SATIE provides a highly optimized implementation for rendering dense 3D audio scenes consisting of high-quality synthesized and physically modelled sound sources. SATIE can render audio for high-density loudspeaker arrays, such as the SAT's 12-meter-high dome-shaped Satosphère, where SATIE's 31 output channels feed the dome's 157 surrounding loudspeakers.

The discussion will feature Settel's novel approach to composition and performance, followed by the underlying techniques used to author and render the music. Specifically, Settel will present the SATIE audio rendering engine, built in SuperCollider, and the rendering translator, Satie4Unity, which connects SATIE to the audiovisual scene authoring and runtime environment Unity3D.

Included will be musical examples with very dense audio scenes, consisting of hundreds of independently moving sound sources rendered in real time. Recent work with "musical particle" systems will be shown as well.


Coffee Break    11:10 – 11:30


  • 11:30 – 11:50: “3D Audio Reproduction Using Frontal Projection Headphones” - Dr. Kaushik Sunder, Post-Doctoral Researcher, Sound Recording Area, Dept. of Music Research, McGill University

3D audio can provide a veridical display of auditory immersiveness. The art of recording and playing back 3D audio is not new; however, it has gained tremendous popularity in the past decade. Headphones provide a private listening space and, due to their portability, have become the most convenient mode of 3D audio playback. 3D sound can be synthesized using special spatial transfer functions known as head-related transfer functions (HRTFs), which are highly idiosyncratic due to their dependence on the listener's anthropometric features (head, shoulders, torso, and pinnae). However, headphone playback of 3D audio is marred by several practical challenges. Individualized HRTFs are important for an accurate and immersive perception of 3D sound; using non-individualized HRTFs to synthesize 3D audio leads to localization imperfections such as front-back confusions, up-down reversals, and in-head localization. In this talk, techniques to model range-dependent individualized HRTFs that do not require any individualized acoustical measurements will be presented. The new technique tackles the problem using specially designed frontal-projection headphones that emulate the listener's pinna spectral features on the fly.
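The basic HRTF-based synthesis the abstract refers to can be sketched briefly: a mono source is filtered with a left-ear and a right-ear head-related impulse response (HRIR). The HRIRs below are crude hand-made placeholders (a delayed, attenuated impulse for the far ear), not measured or individualized data.

```python
import numpy as np

fs = 48000
# Placeholder HRIRs for one direction: the right ear gets a later,
# quieter arrival. Real individualized HRIRs come from measurement
# or a model, as discussed in the talk.
hrir_left = np.zeros(256)
hrir_left[10] = 1.0
hrir_right = np.zeros(256)
hrir_right[24] = 0.6

# Mono source: one second of a 440 Hz tone.
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)

# Binaural synthesis: filter the source with each ear's HRIR.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=-1)
print(binaural.shape)  # (48255, 2)
```

The interaural time and level differences baked into the two HRIRs are what place the source laterally; the fine spectral detail of real pinna responses (absent here) is what resolves front/back and elevation.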


  • 12:00 – 12:20: "Lateral listener movement on the horizontal plane: sensing motion through binaural simulation" - Matthew Boerum, Ph.D. Student & Course Lecturer, Sound Recording Area, Dept. of Music Research, McGill University

An experiment was conducted to better understand first-person motion as perceived by a listener moving between two virtual sound sources in an auditory virtual environment (AVE). It was hypothesized that audio simulations using binaural cross-fading between two separate sound source locations could produce a sensation of motion equivalent to real-world motion. To test the hypothesis, a motion apparatus was designed to move a head and torso simulator (HATS) between two matched loudspeaker locations while recording various stimulus signals (music, pink noise, and speech) in a semi-anechoic chamber. Synchronized simulations were then created and referenced to video. In two separate double-blind MUSHRA-style listening tests (with and without visual reference), 61 trained binaural listeners evaluated the sensation of motion among real and simulated conditions. Results showed that listeners rated the simulation as presenting the greatest sensation of motion among all test conditions. The experiment was recently repeated in a highly reverberant space; those results are pending but will also be discussed.
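The cross-fading idea at the heart of the hypothesis can be illustrated with an equal-power fade between two binaural recordings. The signals below are placeholder noise standing in for the HATS recordings at positions A and B; the gain law is a standard equal-power ramp, not necessarily the exact curve used in the experiment.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of simulated motion

# Stand-ins for binaural (left/right) recordings made at the two
# matched source positions A and B.
rng = np.random.default_rng(1)
binaural_a = rng.standard_normal((fs, 2))
binaural_b = rng.standard_normal((fs, 2))

# Equal-power cross-fade: the gains trace a quarter sine/cosine cycle,
# so g_a**2 + g_b**2 == 1 and total power stays constant while the
# image moves from A to B.
g_a = np.cos(0.5 * np.pi * t)[:, None]
g_b = np.sin(0.5 * np.pi * t)[:, None]
motion = g_a * binaural_a + g_b * binaural_b
print(motion.shape)  # (48000, 2)
```

A linear fade would dip in loudness mid-trajectory for uncorrelated signals; the equal-power law avoids that artifact, which matters when the variable under test is the sensation of motion itself.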


  • 12:30 – 12:55: “Discovering a physical parameter that co-varies with enhanced immersiveness of multichannel-reproduced music with height” - Dr. Sungyoung Kim, Assistant Professor, Rochester Institute of Technology, Rochester, NY, USA

Height channels add enhanced immersiveness and presence to a conventional ITU-R BS.775 multichannel sound field. Previous study results showed that a configuration of four height loudspeakers significantly changed perceived immersiveness. A subsequent study compared Japanese and North American listeners and found that the Japanese group chose configurations giving them “frontal” and “narrow” auditory images, while the other group chose ones with “spacious” and “surrounding” percepts. To determine the physical parameters accounting for the listeners' judgments, loudspeaker-to-listener transfer functions at multiple positions in a room were measured and simulated. The results show that the listeners' hedonic judgments were correlated with the Front-and-Overhead Energy Ratio (FOER) (r = -0.79 for the North American group and r = -0.54 for the Japanese group). This physical property is expected to assist future users in utilizing height channels in practical recording projects.


 Lunch Break    1:15 – 2:00    (room A-832/833)


  • 2:00 – 3:30 pm: 3D Music Listening Session in Studio 22 (level -2)

End of the Workshop

