Computational principles and neural measures of speech processing
Time: Thursday, January 24, 2013, 11:00am - 12:00pm
Place: Speech and Hearing Center, Room C141
Joseph Toscano (University of Illinois at Urbana-Champaign)
Research on speech perception has long sought to identify the acoustic-phonetic cues that listeners use to distinguish speech sounds. This is a particularly challenging problem, since a number of non-phonological factors affect both the cues themselves (e.g., variability between talkers' voices) and listeners' ability to perceive certain cues (e.g., effects of hearing loss). A number of models have proposed that listeners recognize speech by relying on specialized mechanisms that discard information in the speech signal that does not indicate a phonological contrast. Here, I argue instead that domain-general computational principles can account for listeners' behavior. Crucially, these principles can be implemented in models as relatively simple combinations of continuous acoustic cues, allowing listeners to integrate multiple sources of information and factor out predictable variation. I present evidence for this from several approaches, including (1) event-related potential (ERP) experiments that examine cortical responses to differences in speech sounds, (2) computational work that examines how statistical learning can be used to acquire speech sound categories over development and to adapt those categories in adulthood, and (3) acoustic-phonetic analyses that allow us to determine which acoustic cues are most informative for a given phonological distinction. Together, the results of these studies suggest that general mechanisms of statistical learning and cue integration can provide useful models for understanding how listeners recognize speech in a variety of contexts.
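To make the computational idea concrete, the sketch below (an illustration, not the speaker's actual model) shows distributional learning over continuous cues: a two-component Gaussian mixture is fit to unlabeled, simulated tokens described by two cues, voice onset time (VOT) and onset f0, as a stand-in for an English voicing contrast. The cue values and cluster parameters are invented for illustration only.

```python
# Minimal sketch of statistical learning of speech categories from
# continuous acoustic cues. All cue values below are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate unlabeled tokens: short-lag VOT with lower onset f0
# (voiced-like) vs. long-lag VOT with higher onset f0 (voiceless-like),
# with overlap between the clusters.
voiced = np.column_stack([rng.normal(10, 8, 300),     # VOT (ms)
                          rng.normal(95, 12, 300)])   # onset f0 (Hz)
voiceless = np.column_stack([rng.normal(60, 15, 300),
                             rng.normal(110, 12, 300)])
tokens = np.vstack([voiced, voiceless])

# Statistical learning as mixture estimation: two Gaussian categories are
# fit to the unlabeled cue distribution; no phonetic labels are provided.
model = GaussianMixture(n_components=2, covariance_type="full",
                        random_state=0).fit(tokens)

# Cue integration falls out of the same model: a novel token's category
# probabilities combine both cues, weighted by their learned reliability.
novel = np.array([[35.0, 105.0]])   # ambiguous VOT, slightly raised f0
print(model.predict_proba(novel))   # graded, not all-or-none, membership
```

On this view, categorization of an ambiguous token is a graded posterior over learned categories rather than the output of a specialized, contrast-specific mechanism, which is one way to cash out the domain-general principles described in the abstract.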
Category: Phonetics and phonology