The Science and Technology of Music

Workshop on rhythm in speech and music

This workshop is organized by CIRMMT Research Axis 5 (Music Perception and Cognition) in collaboration with the Centre for Research in Language, Mind and Brain (CRLMB). It will take place on April 21, 2011, from 10:00am-2:30pm in A832 (New Music Building).

What: Research Workshop
When: April 21, 2011, from 10:00 AM to 2:30 PM
Where: A832 & A833, New Music Building, 527 Sherbrooke St. West

Registration

Registration is mandatory, as seating is limited to 45 and is allocated on a first-come, first-served basis: Workshop on rhythm in speech and music - Registration form

Description

Until recently, comparisons of similarities and differences between speech and music were for the most part purely theoretical. This has changed with the development of experimental paradigms for exploring the structure and nuances of speech and music production and perception. One of the most fruitful areas of comparison has been the realm of rhythm. This workshop will present research situated within both fields, as well as comparative work between them, bringing together cognitive, developmental, and cross-cultural perspectives on the production and reception of rhythm in speech and music, and demonstrating their specificities and interactions in human auditory cognition.

Guests

  • Ani Patel, Music and the Brain Program, The Neurosciences Institute, La Jolla, CA
  • Erin Hannon, Department of Psychology, University of Nevada, Las Vegas
  • Linda Polka, CRLMB, School of Communication Sciences and Disorders, McGill University
  • Suzanne Curtin, Department of Psychology, University of Calgary
  • Lawrence Zbikowski, Department of Music, University of Chicago; Fulbright Visiting Research Scholar, CIRMMT, McGill University
  • Leigh van Handel, College of Music and Program in Cognitive Science, Michigan State University
  • Godfried Toussaint, CIRMMT, School of Computer Science, McGill University; Department of Music, Harvard University
  • Anna Tirovolas, CIRMMT, Department of Psychology, McGill University

Schedule

Moderator: Stephen McAdams, CIRMMT, Schulich School of Music, McGill University

  • 9:30 - Coffee
  • 10:00 - Introduction
  • 10:05 - Ani Patel, Music and the Brain Program, The Neurosciences Institute, La Jolla, CA: Rhythm in speech and music
Rhythm is fundamental to speech and music, yet empirical comparisons between rhythmic patterns in the two domains are rare. I suggest that progress in empirical comparative research depends on a clear distinction between periodic and nonperiodic rhythms in human auditory cognition. I argue that speech and music have fundamental differences in terms of periodic rhythms, and important connections in terms of nonperiodic rhythms.
  • 10:20 - Erin Hannon, Department of Psychology, University of Nevada, Las Vegas: The ontogeny of rhythm processing in music and speech
Rhythmic similarities exist in the music and speech of a given culture, raising crucial questions about the extent to which cognitive representations of rhythm overlap in the two domains. Developmental research has the unique potential to illuminate the origins of domain-specificity and domain-generality of rhythm processing. Given that exposure to auditory input profoundly shapes the acquisition of culture-specific knowledge of both music and language, an important question is whether or not music-speech similarities predominate in child-directed input, and how such similarities might influence developing knowledge. The position will be taken that by examining the ontogeny of rhythmic representations, we may better understand the basis and function of rhythm in both domains.
  • 10:35 - Linda Polka, CRLMB, School of Communication Sciences and Disorders, McGill University and Suzanne Curtin, Department of Psychology, University of Calgary: The imprint of native language rhythm on speech segmentation
In this talk we will provide a survey of research on word segmentation in infants, highlighting studies conducted with infants who are acquiring English, French, or both languages. English and French belong to different rhythmical families, English being stress-timed and French syllable-timed. Our review includes findings from native-language, cross-language, and cross-dialect investigations using natural speech materials, as well as data obtained from infants and adults using more controlled speech materials in which statistical cues and language-appropriate stress cues are manipulated independently. The findings reveal that language experience guides segmentation along different developmental paths using different strategies, favoring stress patterns (trochees) in English perceivers and syllables in French perceivers. We will argue that word segmentation is a language-specific skill that is strongly biased by the native-language rhythm at every stage of development. Thus, native-language rhythm leaves an early and lasting imprint on speech segmentation.
  • 10:50 - Lawrence Zbikowski, Department of Music, University of Chicago; Fulbright Visiting Research Scholar, CIRMMT, McGill University: Rhythm in human communication
It seems quite evident that rhythmic processes can serve as a resource for both speech and music. That said, I would argue that the different functions of speech and music within human cultures lead to their drawing on this resource in different ways. In my presentation I shall sketch some of these differences, and suggest how they can inform our research into the role of rhythm in human communication.
  • 11:05 - Leigh van Handel, College of Music and Program in Cognitive Science, Michigan State University: Rhythm and meter as a compositional fingerprint?
Prior studies of the relationship between musical rhythm and speech rhythm have focused on cross-language results. As a music theorist, I believe that there is much more musically meaningful information available in such studies, and I would urge researchers to consider the musical ramifications of recent developments in speech/music rhythm studies.
  • 11:20 - Godfried Toussaint, CIRMMT, School of Computer Science, McGill University; Department of Music, Harvard University: Do there exist any nontrivial features of durational rhythms that correlate with perceptual similarity?
Two general approaches to the measurement of similarity between purely durational rhythms represented as symbolic sequences are: the feature-based approach and the transformation method. In the feature-based approach the symbolic sequence is represented by a collection of its features, and similarity between two rhythms is measured by a function of the feature values. The transformation method measures similarity between two rhythms by the effortlessness with which one sequence is transformed to the other. This effortlessness is typically measured by a function of the minimum number of some elementary mutations required to carry out the transformation. The proposition is put forward that good non-trivial features of duration patterns are hard to find, and as a consequence, the transformation method is superior to the feature-based approach for predicting human judgments of durational rhythm similarity.
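The transformation method described above can be made concrete with a short sketch. Below is a minimal Python illustration, assuming rhythms are encoded as binary strings in box notation ('x' = onset, '.' = rest) and that the elementary mutations are insertions, deletions, and substitutions of unit cost (i.e., Levenshtein edit distance); the two clave patterns are illustrative examples, not materials from the talk:

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to transform string a into string b (Levenshtein distance)."""
    m, n = len(a), len(b)
    # d[i][j] = distance between a[:i] and b[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[m][n]

# Two 16-pulse timelines in box notation (illustrative examples)
son_clave   = "x..x..x...x.x..."
rumba_clave = "x..x...x..x.x..."

# A smaller distance means the rhythms are judged more similar
print(edit_distance(son_clave, rumba_clave))  # → 2
```

A feature-based approach would instead summarize each string by a vector of features (e.g., number of onsets, inter-onset intervals) and compare the vectors; the abstract's claim is that such features are hard to choose well, whereas the transformation distance needs no feature engineering.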
  • 11:35 - Anna Tirovolas, CIRMMT, Department of Psychology, McGill University: Perception of emotional expression in musical performance
In music, timing and amplitude are the principal variables a performer will explicitly vary in order to convey an expressive performance. A series of experiments conducted in our laboratory suggests that listeners are sensitive to subtle variations in timing and amplitude conveyed in piano performances. We found that variation in timing serves to convey expressivity to a greater extent than variation in amplitude. Expressivity in musical performance can be viewed as a parallel to prosody in speech. Timing varies from one musical performance to another, as does the timing of successive linguistic utterances. Furthermore, it has been observed that a language’s inherent rhythm, or prosody, is associated with that particular culture’s music. The position we take here is that timing (or rhythm) can be considered a common prosodic feature of language and music.
  • 11:50 - Buffet Lunch
  • 13:00 - Structured discussion of the emerging issues