Workshop: Designing auditory experiences for an always-on Mixed Reality system
An event organized in collaboration with Research Axis 1 (Instruments, devices, and systems).

Preceding David Lindlbauer's Distinguished Lecture on October 3, this workshop will bring together researchers in audio and mixed reality.

Registration & Call for Presentations

This event is free and open to the general public with registration; lunch will be provided preceding the workshop. 

To register as an attendee or to submit a presentation proposal, please fill out the following form before September 28, 3:00 p.m.



We are moving towards a future where smartphones and desktop computing devices are complemented, and potentially replaced, by always-on head-mounted Mixed Reality devices. Such future systems will be able to communicate with users by displaying nearly arbitrary content. Beyond visual content, however, auditory and multi-modal content will be key enabling factors for developing experiences that benefit users. In this workshop, we hope to gather insights, knowledge, and ideas on key considerations and novel concepts for future multi-modal Mixed Reality systems.


David Lindlbauer is an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University where he leads the Augmented Perception Lab. His research focuses on creating and studying enabling technologies and computational approaches for adaptive user interfaces to increase the usability of AR and VR interfaces, with applications in casual interaction, productivity, health and robotics. Prior to joining CMU, David received his PhD from TU Berlin, advised by Prof. Marc Alexa, and was a postdoctoral researcher at ETH Zurich. He has published more than 20 scientific papers at premier venues in Human-Computer Interaction such as ACM CHI and ACM UIST and was awarded the ETH Postdoctoral Fellowship in 2018. His work has attracted media attention in outlets such as MIT Technology Review, Fast Company Design, and Shiropen Japan.


12:00 Lunch, informal mingling
12:20 Welcome/Introduction (Jeremy)
12:25 David Lindlbauer: Designing auditory experiences for an always-on Mixed Reality system
13:10 Uro Pierrick: Animate
13:35 Nicolas Bouillot: Auditory Spatial Augmented Reality for Group Immersion
14:00 Naeem Komeilipoor: Spatial audio in mixed reality devices
14:25 Jeff Blum: Pitfalls in acceptance of always-on Augmented and Mixed Reality systems
14:50 David Lindlbauer: Reflections on the preceding presentations
15:00 Breakout rounds/discussion: synthesizing the presented information into potential future interactive scenarios

Uro Pierrick: Animate

Last month, the show Animate, a collaboration between Concordia and McGill Universities partly funded by the FRQSC, premiered for over 400 people in Weimar, Germany. It addresses the eventuality of climate catastrophe by merging performance, radio play, and an immersive audiovisual installation designed for 12 participants/spectators wearing VR headsets together. The design of the show explored spatial sound, haptic feedback, and biosensing as ways to engage the audience beyond the obvious visual aspect of XR, interrogating the audience on what is truly virtual. This presentation will elaborate on the challenges and insights encountered during this year-long design, implementation, and execution.

Nicolas Bouillot, research co-director, Société des arts technologiques: Auditory Spatial Augmented Reality for Group Immersion

Our spatial AR approach for mixed reality systems will be discussed and illustrated through projects conducted by the [SAT] Metalab. Our work involves spatial audio devices and methods, telepresence, and live acoustical simulation for large immersive spaces dedicated to group immersion.

Naeem Komeilipoor, Founder & CTO of AAVAA: Spatial audio in mixed reality devices

With spatial audio technology becoming more widespread in mixed reality devices, users increasingly expect immersive "surround sound" experiences. But for these experiences to be fully immersive, such systems should not only incorporate head tracking but should mimic the full human attention system, which combines head orientation, gaze direction, and auditory focus. AAVAA addresses this problem, enabling superhuman hearing for extended reality by understanding where a user is facing, looking, and even listening, by decoding their brain and bio-signals from unobtrusive sensors located in and around the ears and across the head.

Jeffrey Blum: Pitfalls in acceptance of always-on Augmented and Mixed Reality systems

Wearing a device that continuously provides multimodal ambient information is fundamentally different from using a phone or laptop. User context and attention become much more critical, and issues ranging from habituation to ongoing signals to evaluating social acceptability weigh heavily on designing, testing, and deploying always-on Augmented or Mixed Reality systems. Drawing on experiences with Autour, an audio augmented reality application for people who are blind or have low vision, and MIMIC, an always-on haptic connection between two partners, this presentation covers practical issues in making always-on AR and MR applications enticing to potential users.