The Science and Technology of Music

William Hartmann: "Sound source localization: How the auditory system copes with confusing data"

A Distinguished Lecture by a guest from Michigan State University, USA.

Filed under: Distinguished Lecture
When: Oct 18, 2018, from 5:00 PM to 6:30 PM
Where: Tanna Schulich Hall, 527 Sherbrooke St W
The lecture will take place in TANNA SCHULICH HALL, followed by a wine and cheese reception in room A832-833 (8th floor of the Elizabeth Wirth New Music Building).   


Objects that are seen with the eyes can be localized in space because of a corresponding map on the retina. Objects that are heard can be localized only by neural computation. Auditory localization computation begins with data determined by the listener's anatomy. For sources in the horizontal plane, these data are the interaural differences in sound arrival time and intensity. These initial data can be intrinsically ambiguous, and they are easily corrupted further by standing waves in a room environment. Somehow, the listener needs to cope. Virtual reality experiments using on-line ear-canal recordings to guide transaural synthesis show how the central auditory system tries to cope with time cues and intensity cues that point in opposite directions. Simultaneous monitoring of head location and orientation reveals listener strategies for integrating information over time and space. Images that are stable in perceived world coordinates despite large head rotations assume special influence in localization. Psychoacoustical and physiological explorations of the neural computations themselves reveal parallel computation processes, with weightings that are strongly dependent on frequency and on temporal structure, apparently making near optimum use of the available interaural data.
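The interaural time differences mentioned above are tiny but measurable. A minimal sketch of how they scale with source direction, using Woodworth's classic spherical-head approximation (a standard textbook model, not something specific to this lecture; the head radius of 8.75 cm and speed of sound of 343 m/s are assumed illustrative values):

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Interaural time difference (seconds) for a distant source,
    per Woodworth's spherical-head model: ITD = (a/c) * (theta + sin theta),
    where theta is the source azimuth in radians from straight ahead."""
    theta = math.radians(azimuth_deg)
    return head_radius / speed_of_sound * (theta + math.sin(theta))

# ITD grows from zero straight ahead to a few hundred microseconds at the side.
for azimuth in (0, 30, 60, 90):
    print(f"{azimuth:3d} deg -> ITD = {woodworth_itd(azimuth) * 1e6:.0f} microseconds")
```

Even the maximum delay, for a source directly to one side, is well under a millisecond, which is why these cues are both powerful and easy to corrupt by room reflections.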


William Morris Hartmann is a professor of physics at Michigan State University and a psychoacoustician. He is best known for the application of signal processing mathematics to human auditory perception, exemplified by his textbook, Signals, Sound, and Sensation. His experiments have emphasized pitch perception (discovering binaural edge pitch and harmonic unmasking) and spatial hearing (discovering the role of the bright spot in azimuthal localization and the negative level effect in vertical localization). He received the 2001 Helmholtz-Rayleigh Award for research from the Acoustical Society of America. He has served as the Society's president and received its Gold Medal in 2017. He wrote the elementary textbook Principles of Musical Acoustics and has taught musical acoustics to thousands of undergraduates over the course of more than 40 years. For fun, he builds and plays analog synthesizers.
