An afternoon of seminars

With Jérémie Voix, ÉTS; Cléo Palacio-Quintin; Jonathan Sterne, McGill University; Tiago Falk, INRS; Rachelle Chiasson-Taylor, LAC; and Cory McKay, Marianopolis College.

  • 14:00-14:30 - Jérémie Voix, École de technologie supérieure: Towards a “bionic ear”

Jérémie Voix is Associate Professor at École de technologie supérieure (ÉTS) and holder of the Sonomax-ÉTS Industrial Research Chair in In-Ear Technologies. Together with his team of researchers and students, he focuses on the development of technologies designed to complement the human ear, from “intelligent” protection against extreme noise, to hearing support and in-ear hearing diagnostics, to the integration of advanced inter-individual communication systems. More fundamental research is also planned, particularly on the micro-harvesting of electrical energy through kinetic or thermodynamic processes integrated within a miniaturized in-ear device, in an effort to address the future challenge of device autonomy. One recent development, the Auditory Research Platform, will be presented to the CIRMMT community, which may benefit from its versatility and portability for immersive in-ear audio processing applications.

 

  • 14:30-15:00 - Cléo Palacio-Quintin: Synesthesia 4: Chlorophylle: Composition for hyper-flute and interactive video

Through a presentation of the compositional process behind this work, Cléo Palacio-Quintin will introduce the different facets of her research-creation work. The composition is built from a recording of a recited poem. A spectral analysis of this voice served as the underlying canvas for developing the work’s entire structure and musical materials. The hyper-flute, fitted with sensors, enabled an analysis of gestural data that guided the development of the work’s interactive aspects. Live signal processing is used not only for the audio, but also to interact with the video images, whose appearance, mixing, and processing are controlled by the hyper-flute.

**Please note this talk will be bilingual. 

 

  • 15:00-15:30 - Jonathan Sterne, McGill University: Histories of sound and sonic futures

How do we understand sound technologies as historical, cultural and philosophical artifacts?  How do ideas about people, power and music get "baked into" new sound technologies?  This talk will introduce audience members to my work in the history and philosophy of sound technologies, and also to the broader field of sound studies in the humanities and social sciences (including some of its guiding assumptions and methods).  I will conclude with a discussion of a new work on signal processing and the future of instruments, and some possible overlaps among humanistic, scientific and engineering concerns. 

 

  • 15:30-15:45 - BREAK

 

  • 15:45-16:15 - Tiago Falk, Institut National de la Recherche Scientifique: Research at the Multimedia/Multimodal Signal Analysis and Enhancement (MuSAE) Laboratory: From blind room acoustics characterization to multimedia quality-of-experience (QoE) perception
The Multimedia/Multimodal Signal Analysis and Enhancement (MuSAE) Laboratory was recently established at the Institut National de la Recherche Scientifique (INRS-EMT), University of Quebec. The Lab features state-of-the-art neurophysiological signal monitoring tools for the development of human-inspired technologies across three themes: multimedia, health, and human-machine interaction. In this talk, I will focus on two main projects related to the multimedia stream, which are closely aligned with two CIRMMT Research Axes. 
 
The first project (closely aligned with the ‘Sound modeling, acoustics and signal processing’ Research Axis) deals with the development of auditory-inspired objective metrics for automated room acoustics characterization from speech and non-speech sounds (e.g., blind estimation of reverberation time, direct-to-reverberant ratio, clarity), as well as for quality and intelligibility assessment. The developed tools have been shown to correlate strongly with subjective ratings from both normal-hearing and hearing-impaired listeners.
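
As a point of reference for the quantities above, the minimal Python sketch below shows the conventional, non-blind way of deriving reverberation time (RT60) from a measured room impulse response, via Schroeder backward integration; the blind methods discussed in the talk estimate the same quantity without access to the impulse response. Function and variable names are illustrative assumptions, not part of the MuSAE tools.

```python
import numpy as np

def rt60_from_impulse_response(h, fs):
    """Estimate RT60 from a measured room impulse response using
    Schroeder backward integration, fitting the -5 to -25 dB span
    of the decay (a T20 fit) and extrapolating to a 60 dB decay."""
    energy = np.asarray(h, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])         # normalized, in dB
    t = np.arange(len(edc)) / fs
    fit = (edc_db <= -5.0) & (edc_db >= -25.0)     # usable decay range
    slope, _ = np.polyfit(t[fit], edc_db[fit], 1)  # dB per second
    return -60.0 / slope                           # seconds per 60 dB

# Toy check: exponentially decaying noise as a synthetic impulse response
fs = 16000
t = np.arange(2 * fs) / fs
h = np.random.randn(2 * fs) * np.exp(-3.0 * t)
print(f"RT60 estimate: {rt60_from_impulse_response(h, fs):.2f} s")
```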
 
The second project (closely aligned with the ‘Music Perception and Cognition’ Research Axis) deals with the development of Quality-of-Experience (QoE) models. QoE is defined as the “degree of delight or annoyance of the user of an application, resulting from the fulfillment of his/her expectations in light of the user’s personality and current mental state” and is driven by three ‘influence’ factors: system (e.g., technological factors), context (e.g., environment), and human (e.g., emotional state, attention). Our main goal is to shed light on the effects of human subjective factors on overall QoE perception. To this end, we have used electroencephalography (EEG) and near-infrared spectroscopy (NIRS), as well as peripheral autonomic nervous system signals, to objectively quantify human behavioural characteristics (e.g., affective states, attention) and their effects on QoE perception of i) music clips, ii) synthetic speech, and iii) reverberant speech.
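
As one concrete illustration of what objectively quantifying such states can involve (a generic sketch, not the MuSAE Lab’s actual pipeline), EEG correlates of attention and affect are commonly summarized as signal power in canonical frequency bands. The Python sketch below estimates alpha-band (8-12 Hz) power for a single channel using Welch’s method; all names, parameters, and the synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs, band=(8.0, 12.0)):
    """Power of one EEG channel in the alpha band, estimated from
    a Welch power spectral density and integrated over the band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]                  # frequency resolution
    return float(np.sum(psd[mask]) * df)      # rectangle-rule integral

# Toy check: a synthetic 10 Hz "alpha" rhythm buried in noise
fs = 256
t = np.arange(30 * fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + np.random.randn(len(t))
print(f"Alpha-band power: {alpha_band_power(eeg, fs):.2f}")
```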
 

 

  • 16:15-16:45 - Rachelle Chiasson-Taylor, Library and Archives Canada: TBA
At Library and Archives Canada, a federal institution mandated to collect and preserve the holdings of creators of national significance, library and archival practices have always closely cohabited. Yet just as differences of approach exist between the two record-keeping disciplines, there are dichotomous ways of looking at digital records, the most obvious being digitized versus born-digital documents. For the music content expert, multifarious levels of separation also exist between types of born-digital music records.
To date, no typology of digital music compositions and performances has been devised to assist archivists with the processing of born-digital music acquisitions. The aim of this paper is to provide concrete examples of the procedures currently applied to these types of records, and to comment on the trends and issues that inevitably arise with respect to their authenticity, sustainability, material value, and discoverability.
 

  • 16:45-17:15 - Cory McKay, Marianopolis College: Applying music information retrieval techniques to audio production education
This talk will introduce the new jProductionCritic software, which is designed to help students of sound recording and production “proofread” their mixes, and to assist teachers in grading production assignments. The underlying music information retrieval techniques used by the software will be briefly discussed, with emphasis on opportunities for improving and expanding jProductionCritic’s functionality. The software will be placed in the context of the overall jMIR research software framework.
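
By way of illustration only (the talk’s actual error-detection checks are not detailed here), a mix “proofreader” of this kind typically scans audio for objective technical faults. The minimal Python sketch below flags one such fault, digital clipping, by looking for runs of consecutive samples pinned at full scale; the function name, threshold, and run length are illustrative assumptions, not jProductionCritic’s API.

```python
import numpy as np

def find_clipping(samples, threshold=0.999, min_run=3):
    """Flag runs of consecutive samples pinned near full scale,
    a common symptom of digital clipping in a student mix.
    `samples` is a float array normalized to [-1, 1]."""
    pinned = np.abs(samples) >= threshold
    regions, start = [], None
    for i, flag in enumerate(pinned):
        if flag and start is None:
            start = i                           # run begins
        elif not flag and start is not None:
            if i - start >= min_run:
                regions.append((start, i))      # [start, end) in samples
            start = None
    if start is not None and len(pinned) - start >= min_run:
        regions.append((start, len(pinned)))    # run reaches end of file
    return regions

# Toy check: a 440 Hz sine driven past full scale, then hard-limited
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
mix = np.clip(1.5 * np.sin(2.0 * np.pi * 440.0 * t), -1.0, 1.0)
print(f"{len(find_clipping(mix))} clipped regions detected")
```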