Aging and Speech Communication
 

[Photo: IU Library and arboretum]
Research Conference, October 7-10, 2007, Indiana University, Bloomington


Poster Abstracts

POSTER SESSION MONDAY OCTOBER 10 (Posters 1-34; Frangipani Room, IMU)

POSTER SESSION TUESDAY OCTOBER 11 (Posters 35-64; Frangipani Room, IMU)

Day: Monday     Poster Number: 1
Title: Benefits of Speech-Perception Training for Older Adults with Hearing Loss

Abstract:
As part of an ongoing study of neural changes, listening effort, and communication benefits of speech-perception training using neuroimaging and eye-tracking outcomes, a word-based training program was evaluated for older adults with hearing loss. Features of the training program (as described in Humes et al., 2009) are (1) emphasis on commonly used, meaningful words in isolation and in phrases; (2) use of multiple talkers; (3) words presented in speech-like fluctuating noise (ICRA); (4) speech and noise spectrally shaped for each subject to assure audibility and simulate aided listening; (5) closed-set identification task under computer control, and (6) auditory and orthographic feedback following incorrect responses. Subjects were assigned to one of two groups, which were balanced for age, gender, and hearing loss. One group received a protocol of 8-12 weeks (24 sessions) of speech-perception training, while the control group received no intervention other than a weekly phone call during the 8-12 week training period. For all subjects, open-set baseline measures of word and sentence recognition included CID Everyday Sentences, Veterans Affairs Sentence Test (VAST), frequent phrases, and 200 frequent words from the speech-perception training corpus. A neuropsychological battery was administered and included measures of general intelligence, attention and executive function, memory, language and phonological processing, and orthographic processing. To date, mean recognition of 400 frequent words improved by ~15%, as measured from the first to the last training session. Mean improvement between baseline and post-training sessions ranged from ~5% for CID Sentences to ~25% for 200 frequent words, whereas little to no improvement was measured in a smaller number of subjects who received no training. 
Analyses will focus on the extent to which differences between training and control groups are influenced by individual differences in demographic, auditory, and cognitive factors. [Work supported in part by the Deafness Research Foundation Centurion Clinical Research Award, Medical University of South Carolina, and Indiana University]

Presenter: Jayne Ahlstrom, Medical University of South Carolina
Co-Authors: Jayne Ahlstrom; Mark Eckert; Larry Humes; Stephanie Cute; Stefanie Kuchinsky; Judy Dubno



Day: Monday     Poster Number: 2
Title: Training Improves Neural Timing in Older Adults

Abstract:
Older adults often express concern regarding decreased ability to hear in background noise. Age-related hearing loss is exacerbated by changes in auditory processing and cognitive skills, including neural timing delays and memory deficits. Although hearing aids improve audibility, they often fail to address the primary concern of older adults, listening in noise, because amplification algorithms do not compensate for neural and cognitive declines. Accurate perception of fast-changing consonant-vowel transitions, requiring precise neural timing, is important for understanding speech in background noise. We therefore asked whether cognitive training designed to improve recognition of consonant-vowel transitions would result in improvements in behavioral and neural responses to speech in noise. Participants include older adults, ages 60-79, assigned to one of three groups. Experimental and active control groups receive 40 hours of computer-based in-home training, either Brain Fitness auditory-based cognitive training from Posit Science (experimental) or general interest educational training (active control). The third group (passive control) receives no intervention. Pre- and post-training assessments include behavioral (audiometry, speech-in-noise measures, cognitive testing) and neurophysiologic measures (brainstem responses to speech in quiet and in noise). Preliminary data indicate significant improvements in speech-in-noise performance and short-term memory. Biologically, subcortical neural responses to speech in noise are earlier and sharper in the post-training experimental group. No significant changes are noted in the active or passive control groups. Our results demonstrate the efficacy of computer-based auditory training for management of listening-in-noise difficulties in older adults. Supported by NIH-NIDCD R01-DC010016.

Presenter: Samira Anderson, Northwestern University
Co-Authors: Nina Kraus



Day: Monday     Poster Number: 3
Title: Do younger adults benefit more than older adults from spatial separation when listening to conversations in a noisy environment?

Abstract:
Age-related deterioration in one's ability to comprehend speech plays a primary role in the difficulties many older adults experience when communicating, especially in a multi-talker auditory scene, which increases the complexity of both the perceptual and cognitive processes required for comprehension. These age-related difficulties could reflect age-related declines in the auditory, cognitive, and/or linguistic processes that support speech comprehension. In two studies we asked younger and older participants to listen to two-person (study 1) or three-person (study 2) conversations played against a babble background noise and to answer questions regarding their content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. In addition, in study 1 the precedence effect was used to create a perceived separation in order to eliminate the contribution to comprehension of the interaural level differences that accompany real spatial separation. The results of both studies show that once the SNRs are adjusted to equate for individual differences in hearing, no significant differences in speech comprehension were found between younger and older participants. In addition, the results also suggest that older adults benefit less from interaural level differences that are present with real spatial separation.
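The SNR-equating step described in the abstract (adjusting signal-to-noise ratios so that all listeners identify individual words in babble equally well) amounts to mixing speech and babble at a per-listener target SNR. The sketch below is an illustrative reconstruction, not the authors' code; the function names and parameter values are invented for the example.

```python
import math

def rms(x):
    """Root-mean-square level of a signal (a list of samples)."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(speech, babble, snr_db):
    """Scale the babble so that the speech-to-babble level ratio equals
    snr_db (in dB), then mix the two signals sample by sample."""
    gain = rms(speech) / (rms(babble) * 10 ** (snr_db / 20))
    return [s + gain * b for s, b in zip(speech, babble)]

# A harder SNR for one listener, an easier one for another:
# mix_at_snr(speech, babble, -2.0) vs. mix_at_snr(speech, babble, +4.0)
```

Equating difficulty then reduces to choosing snr_db per listener from a pre-test (for example, the SNR yielding a criterion level of word identification in babble).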

Presenter: Meital Avivi-Reich, University of Toronto (Human Communication Lab)
Co-Authors: Meredyth Daneman; Bruce Schneider



Day: Monday     Poster Number: 4
Title: Effects of the volatile anesthetic isoflurane compared to the sedative Domitor on envelope following responses in young and aged animals

Abstract:
GABAergic inhibition has been shown to shape sound-evoked responses of neurons throughout the central auditory pathway. Thus, loss of GABAergic inhibition with age may be a contributing factor to many auditory processing deficits observed in the central auditory system. Many commonly used anesthetics, including isoflurane, act at least in part via modulation of the GABAergic system. Therefore, anesthetics that act on the GABAergic system may differentially affect young individuals compared to aged individuals whose GABAergic inhibition has declined. This study tests two questions: 1) whether volatile anesthetics such as isoflurane alter auditory evoked responses compared to a sedative (medetomidine, also known as Domitor), which has been shown to preserve auditory responses up to the level of the inferior colliculus, and 2) whether the effect of isoflurane is smaller in aged than in young animals. Auditory brainstem responses (ABRs) and envelope following responses (EFRs) were assessed using sinusoidally amplitude-modulated (SAM) tones, sinusoidally frequency-modulated (SFM) tones, and sounds with ramped or damped envelopes presented to aged (21-23 months old) and young (3-5 months old) Fischer-344 rats. ABR thresholds were 5-10 dB lower using Domitor in both young and aged animals. Phase locking to modulation frequencies at or below 45 Hz AM was better using Domitor than isoflurane in both young and aged animals. The amplitudes of the EFRs were also typically larger with Domitor at all modulation frequencies. Differential auditory processing with age due to isoflurane when compared to Domitor, and implications for clinical testing of pharmacological manipulations, are discussed. Work supported in part by the American Federation for Aging Research.
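The sinusoidally amplitude-modulated (SAM) tones used to evoke EFRs follow the standard form s(t) = (1 + m sin(2π fm t)) sin(2π fc t). A minimal sketch, with illustrative parameter values rather than those of the study:

```python
import math

def sam_tone(fc, fm, depth, dur, fs=44100):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(dur * fs)
    return [(1 + depth * math.sin(2 * math.pi * fm * i / fs))
            * math.sin(2 * math.pi * fc * i / fs)
            for i in range(n)]

# Example: 1 kHz carrier, 45 Hz modulation, 100% depth, 100 ms
tone = sam_tone(1000, 45, 1.0, 0.1)
```

The EFR then measures how faithfully neural responses phase-lock to the fm envelope of such a stimulus.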

Presenter: Edward Bartlett, Purdue University
Co-Authors: Christopher Evenson; Aravindakshan Parthasarathy



Day: Monday     Poster Number: 5
Title: Age-related differences and similarities in the time-line for separating speech from an auditory masker

Abstract:
Older adults experience a greater degree of difficulty than younger adults listening to a target talker when other people are talking. A contributing factor to this difficulty may be that it takes older adults a longer period of time to separate target speech from competing sound sources. There is reason to expect that the amount of time it takes older adults will depend on the complexity of the auditory scene. This study investigates age differences in the time-line for segregating target speech from either a speech spectrum noise or a babble background (12 people talking simultaneously) by varying the delay between masker onset and speech onset. Results indicate that older adults are as fast as younger adults at separating speech from a noise masker, but are not as capable as younger adults of taking advantage of the delayed onset of the speech target when the masker is babble.

Presenter: Boaz Ben-David, Toronto Rehab and University of Toronto
Co-Authors: Vania Tse; Bruce Schneider



Day: Monday     Poster Number: 6
Title: Bimodal presentation of spoken speech and short-term memory in older adults

Abstract:
Visible cues from a talker's face have been shown to improve spoken speech understanding, especially in noise. It is plausible that access to bimodal cues may also enhance cognitive processes, e.g., short-term memory (STM) function. This cross-sectional study in progress includes healthy older adults (age 60 and over), young adults (ages 26-30), and children (ages 10-11). All participants have normal hearing and good speech understanding in quiet and noise. Stimuli are video clips of bisyllabic words organized into lists of unpredictable length, presented in quiet and noise (SNR 5 dB), under auditory-visual and auditory-only conditions. Participants are instructed to attend to the lists and repeat back the final 4 words heard. Recall accuracy of the final 4 words from the running memory task is examined. It is hypothesized that the last word heard will have the highest recall accuracy, while the fourth word back will have the lowest. Recall accuracy is hypothesized to be higher for auditory-visual stimuli. In noise and/or in the case of diminished sensory function, a listener may exert more perceptual effort, reducing resources available for STM. Visual cues may reduce perceptual effort, making more cognitive resources available for STM. Early findings, from the study in progress, demonstrated that seeing the talker's face while listening to speech facilitated word recall for some individuals. Greater understanding of how bimodal presentation enhances cognitive processes may inform multi-component models of STM and guide technology for listeners with hearing loss.

Presenter: Lynn Bielski, University of Illinois at Urbana-Champaign
Co-Authors: Charissa Lansing



Day: Monday     Poster Number: 7
Title: Effects of Health on Sentence Comprehension in Aging

Abstract:
Despite the fact that language is fundamental to cognitive function, it rarely receives systematic consideration in studies of the effects of health on cognition. Hypertension and diabetes are known to adversely affect multiple cognitive domains among older adults, including speed, flexibility, and memory. Our study addresses the research gap regarding the effects of health on language in aging, exploring whether hypertension and diabetes affect sentence comprehension. We build on earlier work in our lab demonstrating that hypertension, but not diabetes, impairs lexical retrieval in older adults. We tested 295 healthy, non-demented adults, aged 55-84. For both diabetes and hypertension, participants were grouped according to whether these conditions were present/absent and controlled/uncontrolled. Two sentence comprehension tasks--Embedded Sentences and Multiple Negatives--were administered. Participants' performance was scored for both accuracy and reaction time. Presence of either hypertension or diabetes produced significant impairment on the Embedded Sentences task (β=6.639, p=0.034). Presence of hypertension (but not of diabetes) produced significant impairment on the Multiple Negatives task (β=-2.07, p=0.009). Individuals with uncontrolled diabetes had significantly poorer accuracy on Multiple Negatives (β= 3.664, p=0.0018). The presence of hypertension in older people is, thus, linked to impaired sentence comprehension. Diabetes, on the other hand, shows a more complicated relationship to sentence comprehension, especially when uncontrolled. These findings raise two fundamental questions: do health factors affect other aspects of language in the elderly, and, if so, what are the neural or cognitive mechanisms by which these effects operate?

Presenter: Dalia Cahana-Amitay, Boston University
Co-Authors: Martin Albert; Avron Spiro III; Emmanuel Ojo; Jesse Sayers; Loraine Obler



Day: Monday     Poster Number: 8
Title: Electrophysiological indices of hearing related phonological processing deficit

Abstract:
Post-lingually acquired hearing impairment is associated with decreasing phonological processing abilities. In this study we used event-related potentials (ERPs) to examine phonological processing in 24 older adults (mean age = 63) with acquired moderate-to-severe hearing impairment (HI, mean duration of hearing loss = 37 yrs) and 24 age-matched normal hearers (NH). Participants performed a visual rhyme judgment task in four conditions. Word pairs consisted of rhymes (R+) and non-rhymes (R-) that were orthographically similar (O+) or dissimilar (O-). Stimuli were written and briefly presented one word at a time. Audiograms, SRTs, and measures of working memory capacity (WMC) and linguistic abilities were collected for all participants. Analyses of behavioural data from 42 participants (21 HI) on the rhyme judgment task show that NH overall outperformed HI in the conditions requiring higher reliance on phonological processing skills (R+O-, R-O+). This was expected. However, HI with higher WMC performed on a par with NH. By contrast, HI with lower WMC performed significantly more poorly than the others. Thus, performance was highly related to WMC in HI but not in NH. In this poster we will supplement these results with ERP data from a larger set of participants. We anticipate HI to show a greater N400 effect in the R+O- and R-O+ conditions, reflecting additional processing. Further, we expect group differences in later ERP components, indicating differential WMC engagement. We will also examine whether ERPs correlate with speech perception in noise. Results will be reported and discussed.

Presenter: Elisabet Classon, Linköping University
Co-Authors: Mary Rudner; Mikael Johansson; Jerker Rönnberg



Day: Monday     Poster Number: 9
Title: Effect of age and type of visual distracter on AV speech perception

Abstract:
The addition of visual input of speech provides a boost of information to a listener and can result in improved speech perception in noise (Grant, Walden, & Seitz, 1998). In everyday situations, however, the addition of background noise often creates both visual and auditory distractions, which potentially can act as sources of masking for both modalities. Although auditory masking has been shown to reduce speech perception ability, the effect of a visual distracter on speech perception is unknown. The goals of this study were: 1) to determine the impact of visual distracters encountered in daily life on an auditory-visual speech perception task in noise, and 2) to examine whether listener age differentially affected performance. It was predicted that while both younger and older adults would exhibit decreased speech perception ability in the presence of a visual distracter, older adults would perform more poorly. Older adults may experience greater difficulty than younger adults in inhibiting visual distracters based on the theory that inhibitory processes are reduced with increased age (Hasher & Zacks, 1988). Younger and older normal hearing listeners performed a sentence recognition task in the presence of a single competing auditory distracter in both auditory only and auditory-visual conditions using TVM sentences (Helfer & Freyman, 2009). Visual distracters included competing faces, text, and video. Results suggest that competing visual distracters impair speech perception performance, and that this change varies depending on type of visual distracter and age.

Presenter: Julie Cohen, University of Maryland, College Park
Co-Authors: Sandra Gordon-Salant



Day: Monday     Poster Number: 10
Title: Neurophysiologic investigation of auditory what and where pathways in young and middle aged adults with normal hearing

Abstract:
When a sound is heard, individuals intrinsically attempt to identify the sound and its location. Previous research has documented automatic and parallel processing of the 'what' and 'where' characteristics of sound in separate pathways. This study involved the neurophysiologic investigation of these auditory dual pathways as a function of working memory and age. Event-related potentials were measured in young and middle-aged adults in a detection and discrimination task for both pitch identification ('what' pathway) and location ('where' pathway). Stimuli consisted of two five-tone complex stimuli (f0 = 230 Hz or 340 Hz) presented via right and left speakers. During the 'what' condition participants attended to the pitch of the sound regardless of location. During the 'where' condition participants attended to the location of the sound regardless of pitch. In experiment 1 (detection), participants evaluated each individual stimulus based on pitch for the 'what' condition or location for the 'where' condition. In experiment 2 (discrimination), participants compared the current stimulus as same or different (pitch for the 'what' condition or location for the 'where' condition) as the previous stimulus. Stimuli were identical for both the 'what' and 'where' conditions in all experiments. Comparison of ERP components and reaction times between experiments allowed for evaluation of working memory between the two age groups and the 'what' and 'where' conditions. Results showed differences in cortical processing of the dual pathways as a function of age, task instruction and memory load. Implications for aging research will be discussed.

Presenter: Ann Marie De Pierro, Montclair State University
Co-Authors: Ilse Wambacq; Martha Ann Ellis; Joan Besing



Day: Monday     Poster Number: 11
Title: The effect of aging on speech perception in noise: comparison between normal hearing and cochlear implant listeners

Abstract:
The purpose of the present study was to investigate the effect of aging on temporal processing and speech perception in noise for normal-hearing (NH) and cochlear-implant (CI) listeners. Previous studies have shown age-related changes in auditory processing (Dubno et al., 2002; Gordon-Salant et al., 2006; Pichora-Fuller et al., 2006). Little is known, however, about such processes in elderly CI listeners. Since cochlear implants have been successfully used to restore usable auditory function to postlingually deafened adults, a greater number of elderly adults have received cochlear implants. Therefore, it is important to understand how much benefit they can obtain from a CI in various listening conditions. Four groups of listeners participated in the present study: young and elderly (> 60 years old) NH and CI listeners. The temporal modulation transfer function (TMTF) and IEEE sentence recognition in quiet and modulated noise were measured for each listener. It is hypothesized that there are significant aging effects on the TMTF and on speech perception in modulated noise, and that age-related changes are greater for CI listeners than NH listeners. Comprehensive data analyses and future study plans will be discussed in the poster.

Presenter: Su-Hyun Jin, University of Texas at Austin
Co-Authors: Chang Liu; Doug Sladen



Day: Monday     Poster Number: 12
Title: Auditory sensitivity to temporal fine structure across the adult lifespan

Abstract:
This poster presents the first results of a large-scale study investigating age-related changes in the ability to process the temporal fine structure (TFS). Listeners aged 18-90 yrs were recruited. All listeners had unilaterally (UNH) or bilaterally normal hearing (BNH), defined as audiometric thresholds of ≤ 20 dB HL between 125 and 4000 Hz. Sensitivity to TFS was assessed using two psychophysical tests. In the first test, UNH and BNH listeners discriminated a monaurally presented harmonic tone complex from an inharmonic tone complex, obtained by shifting all components of the first complex upwards in frequency. Fundamental frequencies (F0) of 77.3, 181.9, and 363.6 Hz were used. The spectral envelope was fixed by applying a filter with a bandwidth of 1F0 and centered on 850, 2000, or 4000 Hz, preserving only unresolved components. In the second test, BNH listeners discriminated a diotic pure tone from the same pure tone with a phase difference between the two ears. Frequencies of 500 and 850 Hz were used. All listeners were also tested on a battery of cognitive tests, assessing processing speed, short-term/working memory, and non-verbal IQ. The aims were to: (i) study the relationship between monaural and binaural TFS processing within the same listener(s); (ii) investigate the contribution of cognitive abilities to TFS processing; (iii) specify the age above which TFS processing starts to deteriorate; and (iv) assess the role of hearing sensitivity at high frequencies (> 4 kHz) on TFS processing at low frequencies. (Work supported by Oticon and MRC UK.)
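The harmonic/inharmonic discrimination stimuli are sums of equal-amplitude components at k·F0, with the inharmonic version obtained by shifting every component upward by a fixed amount ΔF. The sketch below is illustrative only; it omits the fixed spectral-envelope filtering described in the abstract, and the shift value is an example, not the one used in the study.

```python
import math

def tone_complex(f0, shift, n_comp, dur, fs=44100):
    """Equal-amplitude tone complex with components at k*f0 + shift.
    shift = 0 gives a harmonic complex; shift > 0 moves every
    component upward by the same amount, making the complex inharmonic
    while leaving the component spacing (and thus the envelope) intact."""
    n = int(dur * fs)
    freqs = [k * f0 + shift for k in range(1, n_comp + 1)]
    return [sum(math.sin(2 * math.pi * f * i / fs) for f in freqs)
            for i in range(n)]

f0 = 181.9                                     # one of the study's F0 values
harm = tone_complex(f0, 0.0, 10, 0.05)         # harmonic complex
inharm = tone_complex(f0, 0.4 * f0, 10, 0.05)  # example upward shift
```

Because the shift preserves component spacing, the two complexes have very similar envelopes and differ mainly in their temporal fine structure, which is what makes the discrimination a TFS test.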

Presenter: Christian Füllgrabe, Medical Research Council, UK
Co-Authors: Brian Moore



Day: Monday     Poster Number: 13
Title: Neurocognitive and Indexical Processing Related to Accuracy on PRESTO, a High-Variability Speech Perception Test

Abstract:
Speech perception in adverse listening conditions, such as social settings with multiple talkers, poses serious problems for listeners. Individuals of all ages and linguistic backgrounds vary greatly in their ability to understand speech in such environments. The current study investigated the hypothesis that individual differences in cognitive resources are associated with this range of speech perception abilities. Accuracy on a new high-variability sentence test, PRESTO, was assessed for 100 young, normal-hearing individuals. Although all were healthy young adults, participants demonstrated a wide range of performance on this speech perception test. Further analyses of participants who fell within the upper and lower quartiles of performance suggest that better speech perception was associated with greater short-term memory capacity, larger lexicon size, better ability to categorize regional dialects, and better self-reports of cognitive load management. Identification of the underlying core neurocognitive factors associated with better speech perception abilities has broad implications for improving communication skills for the aging population. In particular, the current study supports further exploration of the role of cognition and indexical training in improving speech communication and speech perception skills in a range of clinical populations. This is particularly important for the aging population, where cognitive resources and/or hearing sensitivity are known to decline.

Presenter: Jaimie Gilbert, Indiana University
Co-Authors: Terrin Tamati; David Pisoni



Day: Monday     Poster Number: 14
Title: Older adults expend more listening effort than young adults recognizing audiovisual speech in noise

Abstract:
Objective: Using a dual task paradigm, two experiments were conducted to: 1) quantify the listening effort that young and older adults expend to recognize speech in noise when presented under audio-only (Experiment 1) and audiovisual conditions (Experiment 2), and 2) determine the influence visual cues have on listening effort. Listening effort refers to the attentional and cognitive resources required to understand speech. Design: All participants performed a closed-set word recognition task and tactile pattern recognition task separately and concurrently. Accuracy and reaction time data were collected. The criterion for single task word recognition performance was set to 80% correct across experiments and across age groups. Study sample: For each experiment, 25 young and 25 older adults with normal hearing and normal (or corrected-to-normal) vision participated. Results: Under equated performance conditions, older adults expended more listening effort than young adults with both audio-only and audiovisually presented speech. Furthermore, the processing demands of audiovisual speech recognition were greater than audio-only speech recognition for all participants. Conclusions: These results suggest that while visual cues can improve audiovisual speech recognition, they can also place an extra demand on processing resources, with performance consequences for the word and tactile tasks under dual task conditions.

Presenter: Penny Gosselin, University of Toronto
Co-Authors: Jean-Pierre Gagné



Day: Monday     Poster Number: 15
Title: Young and old listeners’ perception of English-accented speech in a background of English- and foreign-accented babble

Abstract:
The current study investigated the effect of the accent of background speakers on the intelligibility of target speech. While a number of studies have shown that speech uttered in a foreign accent is more difficult to understand than speech uttered by a native speaker, particularly when heard in a noisy background, we are not aware of any studies on the effect of the speaker accent of the background babble on intelligibility. Moreover, only a few recordings of speech babble are widely used within the community of speech perception researchers. The speakers in these recordings speak with a particular accent which, in a particular testing situation, may or may not match the accent of the recorded target speaker and the accent of the particular listener group. Whether a mismatch between these variables affects speech intelligibility is unknown. In the current study, all background speakers spoke English, but either with a British English (BE) or American English (AE) accent, or with an Indian (Ind) or Italian (Ital) accent. All target sentences were spoken in BE. Listeners were either BE or AE speakers. We measured the intelligibility of final words in low- and high-predictability sentences that were masked by babble differing both in the number of voices (1, 3 or 8) and in the accent of English. We tested young (n=88, mean age: 21.3 years) and old (n=48, mean age: 66.12 years) listeners with normal hearing. Results showed that the advantage for high- over low-predictability sentences was similar for young and old listeners. Additionally, both age groups showed the biggest decrease in intelligibility of the target sentences for Ind-accented background babble. Furthermore, both age groups showed a decrease in intelligibility when the number of background speakers increased from one to three. However, this decrease was much more pronounced in young than old listeners in the BE condition.
When the accent of the background speakers was either Ital- or Ind-accented English, young listeners' intelligibility decreased not only from one to three speakers, as it did in older listeners, but also from three to eight. These results indicate that the speech accent of the background babble can affect intelligibility of target speech, and that there are situations in which this effect can differ between young and old listeners. Implications for the use of standard babble recordings will be discussed.

Presenter: Antje Heinrich, University of Cambridge
Co-Authors: Antje Heinrich; Korlin Bruhn; Sarah Hawkins



Day: Monday     Poster Number: 16
Title: Executive Function Prior to Cochlear Implantation in Younger and Older Post-Lingually Deaf Adults

Abstract:
Advanced age in itself is not a contraindication to cochlear implantation in post-lingually deaf adults. In fact, it is not uncommon for medically healthy individuals to receive a cochlear implant well into their 80s and early 90s. However, we are just now beginning to understand the complex relationship between speech perception and cognitive aging in individuals with cochlear implants. Recently, Waltzman et al. (2010) pointed to effects of cognitive aging in elderly adults who reported decreased hearing ability 20 years after cochlear implantation. The objective of the current study was to carry out a preliminary evaluation of pre-implant executive function in younger and older adults and its potential relationship with spoken word recognition after cochlear implantation. The Behavior Rating Inventory of Executive Function - Adult (BRIEF-A; Roth et al., 2005) was used to measure everyday behavioral manifestations of executive function prior to cochlear implantation in 9 younger adults, ages 32-66 years (M=52), and 9 older adults, ages 69-87 years (M=77). The respondent indicates whether each of 75 statements was never, sometimes, or often a problem over the past 6 months, across nine domains of executive function. The domains are combined to form three indexes. Spoken sentence recognition in quiet and noise, as well as spoken word recognition in quiet, prior to implantation and within 1 year following device hook-up, was assessed through chart review. Older adults had significantly more problems initiating tasks and planning/organizing tasks than younger adults. Further, compared to the normative sample, older, but not younger, adults had more difficulties initiating, planning/organizing, and shifting fluently between tasks and ideas. In order to assess the relationship between executive function, demographic variables, and word recognition measures, the younger and older groups were combined.
The longer adults experienced deafness prior to cochlear implantation, the more problems they reported with inhibitory control, emotional control, and behavior regulation. As a group, participants who struggled to understand sentences in noise also reported problems with planning/organizing tasks. These preliminary data suggest that evaluating the executive function of older adults is a fruitful avenue for future research on understanding the additional difficulties experienced by elderly cochlear implant users.

Presenter: Rachael Holt, Indiana University
Co-Authors: Jessica Beer; Tera Quigley; David Pisoni



Day: Monday     Poster Number: 17
Title: Functional MRI studies of short-term memory processing in older adults with hearing loss or tinnitus

Abstract:
In a previous functional magnetic resonance imaging (fMRI) study, we used passive listening and active discrimination tasks to investigate the neural bases of hearing loss and/or tinnitus in older adults. We found that the discrimination task was more useful than the passive listening task in revealing differences among three groups of age- and gender-matched older adults: participants with hearing loss with tinnitus (TIN), participants with hearing loss without tinnitus (HL), and control subjects with normal hearing without tinnitus (NH). Comparing the groups directly, we found decreased activation in the parietal and frontal lobes for the TIN group compared to the HL group, and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. These results suggest that differential engagement of a putative auditory attention and working memory network, comprising regions in the frontal, parietal, and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone. To further investigate the influence of hearing loss and tinnitus on auditory and working memory processing, we have begun a series of new experiments investigating auditory and visual attention using short-term memory tasks. The tasks incorporate either two (Lo-load) or three (Hi-load) stimuli in a discrimination paradigm. The visual stimuli are line drawings of isolated Korean characters (meaningless to our participants) and the auditory stimuli are pure tones varying in frequency. Behaviorally, the Hi-load task elicits poorer accuracy and longer reaction times than the Lo-load task. 
Preliminary results of 4 participants with hearing loss and tinnitus and 4 participants with normal hearing without tinnitus showed that the Hi-load task activated a distributed network of regions in the frontal, parietal and temporal cortices relative to the Lo-load task. The regions within the network varied depending on the modality. We are expanding our study by increasing the number of participants and recruiting those with hearing loss without tinnitus. (Work supported by the Tinnitus Research Consortium.)

Presenter: Fatima Husain, University of Illinois at Urbana-Champaign
Co-Authors: Jake Carpenter-Thompson; Kwaku Akrofi



Day: Monday     Poster Number: 18
Title: Effects of noise, reverberation, and age on virtual localization

Abstract:
Digital signal processing has been used to develop tests that measure virtual localization ability in realistic listening situations and environments for individuals with and without hearing loss. The present study was designed to compare the effects of increasing reverberation and noise on the ability of young, middle-aged, and older adult listeners with normal hearing to localize a speech signal using these tests. Thirty-five subjects participated in this study: 11 young (20-35 years old, mean 27), 14 middle-aged (45-57 years old, mean 54), and 10 older (60-74 years old, mean 65) listeners with normal hearing. The Spatial Localization in Quiet (SLIQ) and Spatial Localization in Noise Test (SPLINT) were administered in an anechoic environment and in reverberant environments with RT60s of 0.6 and 0.9 sec. The SPLINT was presented at signal-to-noise ratios of 0, -4, and -8 dB. These tests measure the accuracy with which listeners can locate nine virtual sound sources (+90° to -90° azimuth) spaced 22.5° apart. The signal is a three-word phrase ("mark the spot") presented at 70 dB SPL for a source at 0° azimuth; the masker is speech-spectrum noise. Results indicate a small but significant effect of age on virtual localization. Both increasing reverberation and decreasing SNR have a significant effect on performance for all three groups of subjects. The combined effect of noise plus reverberation is significantly greater than the effect of either one alone. [Work supported in part by a grant from the ASHA Foundation to Y. Zheng.]

Presenter: Janet Koehnke, Montclair State University
Co-Authors: Joan Besing; Yunfang Zheng; David Cooper; April Ferise; Ilse Wambacq



Day: Monday     Poster Number: 19
Title: Evaluation of pupil size as an indicator of listening effort

Abstract:
Listening to speech in noisy environments can be exhausting, especially for older adults who often abandon well-fit hearing aids. Objective measures of effort could help establish the efficacy of hearing loss interventions. Pupil size is a physiological measure that varies with task load and is thought to index cognitive effort. As part of an ongoing study, pupillometry was employed to further characterize communication and other benefits of speech-perception training (Humes et al., 2009). Using the Visual World Paradigm, we monitored pupil size as older adults with hearing loss identified single-syllable words presented at varying signal-to-noise ratios (SNRs). Participants selected the target from a set of four orthographically presented options that included fillers and, for a subset of "competitor" trials, a foil that shared a consonant cluster and vowel sound with the target. Speech and noise were spectrally shaped for each participant to assure audibility and simulate aided listening. Participants correctly identified significantly fewer words in competitor than in non-competitor trials. Although performance within competitor trials did not vary with SNR, pupil size was significantly larger in the more challenging SNR, suggesting that pupil size provides more information about listening effort than word recognition scores alone. Additionally, participants who scored lower on standardized measures of grapheme-to-phoneme decoding and working memory had larger pupil sizes when correctly identifying words, even in the easiest SNR condition. These results demonstrate that pupil size is a sensitive indicator of listening effort and varies with individual differences in cognitive abilities that support word recognition. (Work supported in part by a DRF Centurion Clinical Research Award, Medical University of South Carolina, and Indiana University.)

Presenter: Stefanie Kuchinsky, Medical University of South Carolina
Co-Authors: Judy Dubno; Larry Humes; Jayne Ahlstrom; Stephanie Cute; Mark Eckert



Day: Monday     Poster Number: 20
Title: Recognition of degraded speech and text: Effects of age, hearing loss, and modality

Abstract:
This ongoing study examines the ability of young listeners with normal hearing and elderly listeners with and without hearing loss to make use of partial information in degraded speech. Recognition of spoken words was compared across three conditions that degraded speech by temporal interruption, spectral interruption, or mixing with speech-shaped noise. Spectral shaping was applied to the stimuli presented to the elderly group with hearing loss to ensure that the effect of audibility did not confound the results. To determine the extent to which performance in auditory tasks relates to non-modality-specific cognitive processes, parallel measures of visual text recognition were also obtained. Specifically, visual text was degraded in one of two ways: masking with a bar pattern or changing the contrast ratio. Preliminary correlational analyses, based on data for 6 of the 45 subjects to be tested, show strong and significant correlations between performance on tasks of the same modality. Moderate and significant correlations were also noted between performance on each of the auditory tasks and one of the visual tasks. Further data collection is underway, and analyses will be updated at study completion for this presentation. (Work supported, in part, by NIH grants T32 DC000012 and R01 AG008293.)

Presenter: Vidya Krull, Indiana University
Co-Authors: Larry Humes; Gary Kidd



Day: Monday     Poster Number: 21
Title: Effects of Age-Related Hearing Loss in Electrophysiological and Behavioral Masking-Level Differences

Abstract:
Previous studies using behavioral measures found smaller masking-level differences (MLDs) in elderly people than in younger people. These results suggested that deficits of the central auditory system influence the magnitude of the MLD; however, peripheral hearing loss is commonly found in older listeners and may also contribute to the diminished MLDs. Thus, the purpose of this study was to investigate the effects of age and hearing sensitivity on auditory temporal processing by comparing behavioral and electrophysiological (AEP) MLDs between young and older listeners. To achieve this goal, high-pass noise was used in young normal-hearing listeners to simulate the mean high-frequency hearing loss of people aged 65-75 years (Brant & Fozard, 1990; Gates et al., 1990). Both behavioral and AEP thresholds were tested in the SoNo and SpiNo conditions. In the behavioral tests, elevated thresholds (SoNo and SpiNo) and reduced MLD magnitudes were found for both older adults and young adults with simulated hearing loss. Furthermore, the smaller MLD magnitudes in the older group and the simulated-hearing-loss group were due to greater threshold elevation in the SpiNo condition. These results are compatible with the findings of previous behavioral studies (Hall, Tyler & Fernandes, 1984; Grose, Poth & Peter, 1994). In the AEP tests, effects of hearing loss were also found in the AEP thresholds of the young listeners with simulated hearing loss. However, the AEP results of the older group were similar to those of the young normal-hearing group; no age effect was found in the AEP results. Because age and hearing loss effects were found in the behavioral results, the discrepancy between the behavioral and AEP results indicates that higher-level cognitive functions of the brain may be involved in the behavioral MLD.

Presenter: Yi-Chi Lo, University of Wisconsin-Madison
Co-Authors: Cynthia Fowler; Elizabeth Leigh-Paffenroth



Day: Monday     Poster Number: 22
Title: The effects of age and type of distractor on the comprehension of a heard lecture

Abstract:
A previous study (Schneider et al., 2000) found that age-related changes in hearing are responsible for most, if not all, of the comprehension difficulties older adults experience when listening to a lecture in a background of unintelligible babble. This finding was somewhat unexpected given that older adults are often thought to be more susceptible to distraction than younger adults. To see whether the adverse effects of a competing background depend on (a) its intelligibility, and (b) its perceptual and conceptual similarity to the target lecture, we had 16 younger and 16 older adults listen to a lecture with either babble (many people talking simultaneously) or a competing lecture in the same voice as the target lecture playing in the background. After adjusting for individual differences in hearing, target lectures were presented at two different signal-to-noise ratios (average SNRs of -2 and -8 dB). Immediately following each lecture, participants answered a series of questions concerning the lecture they had just heard. The results were: (1) younger and older adults performed equally well under all conditions; (2) performance was worse when the background was a competing lecture as opposed to babble; (3) performance decreased as distractor level increased when the background was babble but not when it was a competing lecture in the same voice. It was argued that the degree of interference from an energetic masker depends on the SNR whereas the degree of interference from a meaningful masker depends on its perceptual similarity to the target speech.

Presenter: Zihui Lu, University of Toronto Mississauga
Co-Authors: Meredyth Daneman; Bruce Schneider



Day: Monday     Poster Number: 23
Title: Influence of Signal Bandwidth and Signal-to-noise Ratio on Speech Perception in Noise

Abstract:
Older listeners often have greater difficulty understanding speech in noise than younger listeners with similar audiometric thresholds. This is especially true when the speech signal is disrupted in some manner [e.g., with respect to semantic cues (Tun et al., 2002)]. Reducing spectral cues is another way to disrupt a speech signal, and it often reduces intelligibility, especially in background noise. We hypothesized that older listeners would be at a greater disadvantage than younger listeners when forced to listen to low-pass filtered speech in noise. SNR loss was measured using the QuickSIN with full-bandwidth and low-pass filtered stimuli in younger and older listeners with normal audiometric thresholds. Contrary to our original hypothesis, older listeners showed less of a decline in SNR loss (falling within a moderate SNR-loss category for the filtered speech condition) than young normal-hearing listeners (who fell within a severe SNR-loss category for the filtered speech condition). Based on these results, we propose an alternative hypothesis: older listeners use a compensatory listening strategy in noise because an aging central auditory pathway may already limit their access to higher-frequency cues in the speech signal. Therefore, when listening to the filtered speech signal, older listeners' performance does not decrease to the same degree as that of the young normal-hearing listeners, because older listeners do not have access to the high-frequency cues to begin with. We will test this alternative hypothesis by examining speech recognition in noise in both younger and older listeners with normal audiometric thresholds listening to unfiltered and several low-pass filtered speech signals.

Presenter: Efoe Nyatepe-Coo, Northwestern University
Co-Authors: Lauren Calandruccio; Patrick Wong; Sumitrajit Dhar



Day: Monday     Poster Number: 24
Title: The Speech Understanding in Noise (SUN) test: an application of intervocalic consonants as a screening tool

Abstract:
Increasing evidence indicates that screening and early treatment of hearing disability in adults can significantly improve quality of life, on both personal and psychosocial grounds, and extend the functional status of the adult population. Advances in research and many recent initiatives and projects have demonstrated that effective approaches to adult hearing screening need to target everyday listening difficulties. Hearing ability in adults is the result of powerful inter-system connections between perceptual and cognitive processes, whereby speech communication requires, concurrently, the ability to hear, to listen, to comprehend, and to communicate. It is widely documented that the most common complaint of adults with self-reported hearing disability is difficulty understanding speech in challenging listening situations, such as background noise, reverberation, or competing speech, and that the major difficulties are typically experienced with the recognition of fast transients and consonants. In this framework, speech-in-noise tests (in particular, consonant and nonsense-syllable tests), which are sensitive to declines in both hearing sensitivity and distortion, are promising techniques for screening adults' speech communication ability. We developed a speech-in-noise consonant recognition test that fulfills design criteria relevant to adult hearing screening, such as short test duration, feasibility for use in non-clinical environments, portability, acceptability, and minimal perceived complexity. The Speech Understanding in Noise (SUN) test was developed in different languages and validated in an overall population of more than 3200 adults and older adults. Results obtained both in controlled ambient noise (default settings) and in high ambient noise (field testing) show the potential of the SUN test for adult hearing screening. 
[This work was performed in the framework of the European project "AHEAD III: Assessment of Hearing in the Elderly: Aging and Degeneration - Integration through Immediate Intervention" (2008-2011) (FP7, contract No. HEALTH-F2-2008-200835)]

Presenter: Alessia Paglialonga, Italian National Research Council (CNR)
Co-Authors: Gabriella Tognola; Ferdinando Grandori



Day: Monday     Poster Number: 25
Title: Adult Age and Individual Differences in Attentional Control of Auditory Perception: Behavioral, Electrophysiological and Genetic Findings

Abstract:
In daily conversational situations different sound sources compete for the brain's limited processing resources; attention is thus needed to select relevant from irrelevant information. The attentional demand varies with the relative perceptual saliency of the competing auditory inputs (Desimone & Duncan, 1995; Miller & Cohen, 2001). Attentional control involves fronto-striatal circuits and depends, in part, on striatal dopamine (Vernaleken et al., 2006). Variation in the DARPP-32 genotype is associated with striatal dopaminergic neurotransmission and fronto-striatal connectivity (Meyer-Lindenberg et al., 2007). In the present study we investigated how aging alters the dynamic interplay between perceptual saliency and attentional control and how these age differences might be reflected in underlying electrophysiological correlates. Furthermore, we investigated whether individual differences in the DARPP-32 genotype are related to individual differences in attentional control. Twenty-five younger and 26 older adults were tested in a variant of the dichotic listening task in which the relative perceptual saliency of two different syllables, presented simultaneously to the two ears, was varied, as was the attentional instruction: participants were directed to focus on either the right or the left ear. The behavioral results showed that older adults modulated their attention less flexibly than younger adults and were more driven by perceptual saliency. The electrophysiological results showed a late negative-going deflection in the ERP wave (N450) at frontal and parietal electrodes in younger adults that was sensitive to the degree of attentional demand and correlated with task performance. The ERP wave of older adults did not reveal an identifiable N450 deflection. 
These results are in line with previous evidence for the involvement of the fronto-parietal attentional network in the performance of a similar task by younger adults and may reflect an age-related deficit in recruiting this network according to attentional demands. In younger adults, individual differences in DARPP-32 genotype were associated with task performance and with the N450 modulation effect. The lack of genetic effects in older adults is most likely due to the very restricted range of their attentional control.

Presenter: Susanne Passow, Max Planck Institute for Human Development Berlin
Co-Authors: René Westerhausen; Isabell Wartenburger; Hauke Heekeren; Ulman Lindenberger; Shu-Chen Li



Day: Monday     Poster Number: 26
Title: Probable age-related auditory neuropathy in "younger" and "older" elderly persons

Abstract:
Objective: Audiological data from an epidemiological investigation of elderly persons were studied. Specific diagnoses of oto-audiological disorders were searched for, and extremely poor speech recognition was noted. Study sample: Three age cohorts from the Gothenburg Gerontological and Geriatric Population study, representative of the general population, were included. Two cohorts of 70- and 75-year-olds represented "younger" elderly persons (n = 474); "older" elderly persons consisted of an 85-year cohort (n = 252). Methods: Clinical pure-tone and speech audiometry was used, and data from medical files were included. Air-conduction thresholds were measured from 0.125 to 8 kHz using the ascending method. Bone-conduction thresholds were measured for the frequencies 0.5 to 4 kHz. The speech reception test consisted of phonemically balanced (PB) monosyllabic words in Swedish. We used strict definitions of poor speech recognition relative to the pure-tone audiogram, modified from the classic signs of retrocochlear hearing loss according to Lidén (1954). Results: Severely impaired speech recognition, reflecting probable age-related auditory neuropathy, was found in 0.4% of the "younger" group and in 10% of the "older" group. In 21 of 24 cases the poor speech recognition was unilateral. Sensorineural hearing loss other than age-related hearing loss, noise-induced hearing loss, and probable age-related auditory neuropathy was diagnosed in 3.4% of the "younger" elderly persons and in 5.2% of the "older" ones. Conductive hearing loss was diagnosed in 6.1% and 10.3%, respectively. Bilateral functional deafness was present in 3.2% of the 85-year-old persons but not in the 70-75-year group. Conclusion: The incidence of probable age-related auditory neuropathy increases considerably from 70-75 years to 85 years of age. There are marked differences between "younger" and "older" elderly persons regarding hearing loss that severely affects auditory function.

Presenter: Ulf Rosenhall, Karolinska University Hospital
Co-Authors: Christina Hederstierna; Esma Idrizbegovic



Day: Monday     Poster Number: 27
Title: Effect of Age on Static and Dynamic Spectral Pattern Discrimination

Abstract:
Past work has shown a relationship between the ability to discriminate spectral patterns and measures of speech intelligibility in clinical subject groups. The current work evaluated discrimination ability for both static and dynamic spectral patterns characterized by low-rate modulation. In the static condition, the ability to detect a change in the phase of a sinusoidal spectral ripple of wideband noise was measured, with ripple density held constant at 1.5 cycles per octave. The dynamic condition determined the signal-to-noise ratio needed to discriminate 1-kHz pure tones frequency modulated by different 5-Hz lowpass noise samples drawn from the same underlying noise distribution, so that discrimination was based on the temporal pattern of fluctuation. Both conditions used a modified descending method of limits with test stimuli pre-recorded on a CD for clinic use. Data were collected from 45 listeners (ages 22 to 85 years) with hearing sensitivity ranging from normal to mild-to-moderate sensorineural hearing loss. Thresholds from older listeners were elevated in both conditions relative to those of younger subjects, with no effect of hearing loss. QuickSIN speech-in-babble thresholds were significantly correlated with both the static and dynamic measures, suggesting clinical utility of the procedures in assessing speech processing ability. [Supported by NIDCD.]

Presenter: Stanley Sheft, Rush University Medical Center
Co-Authors: Valeriy Shafiro; Robert Risley



Day: Monday     Poster Number: 28
Title: Intelligibility of speech produced by older talkers

Abstract:
It is now well established that older adults experience more difficulty than young adults when processing speech embedded in noise or multi-talker babble, as well as in the absence of contextual cues. The ability to understand fast or temporally altered speech and to detect and recognize brief non-speech and speech sound sequences also declines with age. These effects seem to persist even when hearing loss is taken into account. However, very little is known about how age-related changes affect speech produced by older talkers. The current study aims to fill this gap by examining whether speech intelligibility and global and segmental speech patterns differ between older and younger talkers. Specifically, we compared the intelligibility of conversational and clear speech for sentences produced by older and younger adults. Six older adults, 3 female and 3 male, with mild to moderate hearing loss participated in the study; their ages ranged from 64 to 78 years (mean = 70.2). A sentence-in-noise perception task, performed by forty-eight young adults, revealed that older adults implemented conversational-to-clear speech adaptations that increased intelligibility. Importantly, though, their overall speech intelligibility was lower than that of young adult speech, and their clear-speech intelligibility gain was smaller (young adult data reported in Smiljanic and Bradlow, 2005). Acoustic correlates of the conversational-to-clear speech modifications produced by older and younger talkers were explored. The analyses focused on temporal speech properties, including overall speaking rate, tense and lax vowel duration, vowel duration before voiced and voiceless coda consonants, and voice onset time. Relative to the young adult productions, older adults' speech was slower overall and the segmental contrasts were less exaggerated in clear speech. 
These results demonstrated that, in addition to perceptual problems, cognitive and hearing changes due to age impact elderly adults' speech patterns and intelligibility.

Presenter: Rajka Smiljanic, University of Texas at Austin
Co-Authors:



Day: Monday     Poster Number: 29
Title: Age, hearing loss and cognition: susceptibility to hearing aid distortion

Abstract:
Hearing aids use complex processing intended to improve speech recognition. While many listeners benefit from such processing, it can also introduce distortion that offsets or cancels the intended benefits for some individuals. This study focused on distortions caused by frequency compression. Participants included 40 adults ranging in age from 60 to 95 years, with hearing thresholds ranging from normal to moderate sensorineural loss. Intelligibility scores and quality ratings were obtained for low-context sentences presented in quiet and in babble at a range of signal-to-noise ratios, and with varying amounts of frequency compression (cutoff frequency and compression ratio). A test of working memory (the Reading Span Test) was administered to each participant. The ability to recognize frequency-compressed speech was significantly related to amount of hearing loss and to working memory. This effect was most pronounced in high-distortion situations (i.e., more frequency compression and/or background noise). Working memory did not affect speech quality ratings. In combination with previous work on working memory and wide-dynamic-range compression processing, our data suggest that poor working memory may result in susceptibility to multiple forms of hearing-aid distortion. [Work supported by NIH and GN Resound.]

Presenter: Pamela Souza, Northwestern University
Co-Authors: Kathryn Arehart; James Kates; Naomi Croghan; Namita Gehani; Ramesh Muralimanohar; Eric Hoover



Day: Monday     Poster Number: 30
Title: Temporal Factors in Multisensory Integration Across the Lifespan

Abstract:
The ability of perceptual systems to integrate sensory inputs depends on the timing of those inputs. Integration occurs for stimuli within a limited temporal range - the temporal binding window (TBW). We explored how the TBW changes across the lifespan, from ages 18-79, and correlated the TBW with illusory measures of integration, including speech. The TBW was measured via a simultaneity judgment (SJ) task in which participants were presented with auditory (A) and visual (V) stimuli at stimulus onset asynchronies (SOAs) varying from -400 to 400 ms (positive = visual first). Older adults showed a lower rate of perceptual binding with synchronous presentations but were more likely to perceptually bind highly asynchronous presentations, resulting in a wider TBW. These results suggest a decrease in the impact of temporal coincidence on multisensory integration with healthy aging. Two illusory measures of integration were also collected - the McGurk effect and the sound-induced flash illusion (SIFI). Older adults were less likely to perceive the McGurk effect, reflecting the decreased probability of integration with synchronous presentations in the SJ task. Older adults showed a stronger SIFI, suggesting that the asynchronous auditory presentation had increased influence on their visual perception. The effect was strongest with 4-beep presentations, which have inherently high levels of asynchrony, reflecting the increased integration of asynchronous presentations with age, as seen in the wider TBW measured by the SJ task. These data show that the width of the TBW increases with healthy aging and impacts individuals' integrative ability.

Presenter: Ryan Stevenson, Indiana University
Co-Authors: Brannon Mangus; Juliane Krueger Fister; Justin Siemann; Andrea Hedley-Williams; Robert Labadie; Mark Wallace



Day: Monday     Poster Number: 31
Title: An fMRI Study of Age-Related Changes in Sustained Attention and Word Recognition

Abstract:
Older adults often experience speech recognition difficulties in challenging listening environments, which may be accompanied by increased fatigue related to listening effort. To characterize attention-related systems that support word recognition in noise using functional neuroimaging, adults (20-79 years old) with normal and impaired hearing listened to a list of 120 words presented in multitalker babble at two signal-to-noise ratios (+3 dB and +10 dB) and repeated each word aloud. fMRI analyses revealed a network of right inferior frontal, anterior cingulate, and posterior temporal regions that exhibited declining activity across the duration of the 26-minute listening task. Regression analyses revealed that activation in these regions was significantly correlated with age, such that greater declines in activation during the listening task occurred in younger adults. Analyses further demonstrated that sustained engagement of these regions was related to improved word recognition. These findings are consistent with the view that sustained engagement of the brain's error monitoring and salience network reflects the increased listening effort required to maintain good communication in challenging environments. [Work supported by NIH/NIDCD.]

Presenter: Kenneth Vaden, Medical University of South Carolina
Co-Authors: Stefanie Kuchinsky; Stephanie Cute; Jayne Ahlstrom; Judy Dubno; Mark Eckert



Day: Monday     Poster Number: 32
Title: Neurophysiologic aging effects for semantic priming in quiet and babble

Abstract:
It has been documented behaviorally that background noise has a greater impact on speech perception in middle-aged than in young listeners. Older adults have been shown to rely more on semantic context during speech perception in difficult listening situations. However, the impact of noise and semantic context on the neurophysiological marker of semantic processing (the N400) has not been explored in presenescence. In the present study we examined auditory event-related potentials (ERPs) using a semantic priming paradigm, in quiet and in background babble, for young and middle-aged monolingual English-speaking adults with normal hearing. Participants listened to modified English SPIN sentences in which the final word was semantically related or unrelated to the stem of the sentence. Analyses of the N400 effect revealed a significant listening condition (quiet vs. babble) by group (young vs. middle-aged) interaction. As shown previously in the literature, a significant difference was found between young and middle-aged adults while listening in quiet. However, the N400 effect was similar for young and middle-aged adults in background babble. Moreover, only young adults showed a change in the N400 effect as a function of listening condition. Thus, even though middle-aged adults showed evidence of aging in semantic processing under quiet listening conditions, their reliance on semantic context was not adversely impacted by the presence of background noise.

Presenter: Ilse Wambacq, Montclair State University
Co-Authors: Caitlin Chauvette; Dana Skerlick; Martha Ann Ellis; Joan Besing; Janet Koehnke

Back to Top >


Day: Tuesday     Poster Number: 33
Title: Executive Function Contributions to Sentence Processing in Aging

Abstract:
Older adults have difficulty in auditory sentence comprehension with increased processing demands, such as syntactically complex structures (e.g., Goral et al., in press) or implausible semantic information (e.g., Obler et al., 1991). In this study, we examine whether specific aspects of executive function (EF) predict healthy older adults' performance on plausible and implausible sentences with varying levels of syntactic complexity. Forty young adults (18-35 years) and 40 healthy older adults (55-75 years) participated in the study. The experiments involved 1) two auditory sentence-comprehension tasks (Goral et al., in press), one with embedded clauses, the other with multiple negatives; and 2) an on-line information processing task (Marton, 2007) measuring three EFs: working memory, inhibition, and attention-switching. Results show that age-related differences in accuracy are greatest for implausible sentences with the most complex syntactic structures (object-relative clauses, two negatives). Multiple regression analyses further indicate that language processing performance was predicted by underlying EFs, although specific predictors differed based on age and task. For the embedded sentence task, RT in the inhibition task approached significance (p = .06) as a predictor of comprehension speed (RT) in young adults, whereas RT in the attention-switching task was a strong predictor (p < .05) of comprehension RT for the object-relative, implausible condition among older adults. For the negative sentence task, accuracy in the inhibition task best predicted accuracy in the 2-negative, implausible condition for older adults (p < .05), but no significant predictor was found for young adults. We conclude, therefore, that semantic plausibility and syntactic complexity increase sentence processing loads on the comprehension tasks and that processing abilities differ between age groups.
We further argue that different cognitive mechanisms support language processing performance across groups. While inhibition is linked to auditory comprehension in both groups, attention-switching appears to contribute to performance specifically in older adults.

Presenter: Jungmee Yoon, CUNY Graduate Center
Co-Authors: Luca Campanelli; Naomi Eichorn; Loraine Obler; Klara Marton; Mira Goral

Back to Top >


Day: Tuesday     Poster Number: 34
Title: Combined Neural and Reaction Time Measures of Audiovisual Integration Efficiency

Abstract:
This research involves an investigation of the neuro-cognitive mechanisms underlying audiovisual integration efficiency in speech recognition. Speech perception is a multimodal process engaging both auditory and visual modalities (McGurk and MacDonald, 1976; Sumby and Pollack, 1954). Sumby and Pollack (1954) demonstrated in a seminal study that lip-reading enhances accuracy scores across multiple auditory signal-to-noise (S/N) ratios. Although traditional accuracy-only models of audiovisual integration (e.g., Braida, 1991; Massaro, 2004) adequately predict audiovisual recognition scores and integration efficiency (Grant, Walden, and Seitz, 1998), they fail to specify the real-time dynamic mechanisms underlying integration. Limitations of traditional modeling approaches thus motivated the use of (non-parametric) statistical and experimental tools. We utilized a reaction time (RT) measure known as workload capacity (Townsend and Nozawa, 1995) to quantify audiovisual integration efficiency in the behavioral domain. The capacity measure was used to compare RT distributions obtained from the audiovisual condition in spoken word identification tasks to the auditory-only and visual-only RTs. Three auditory listening conditions were utilized: clear auditory signal, -12 dB, and -18 dB SPL. We also obtained EEG recordings in conjunction with accuracy and RT data to better understand how dynamic brain signals relate to behavioral information processing measures. First, the behavioral results showed efficient audiovisual integration, measured by a workload capacity coefficient greater than 1, across low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. Interestingly, ERP component analyses indicated increased audiovisual enhancement (AV vs. A-only and V-only) in left parietal/temporal and frontal scalp regions for lower auditory S/N ratios (higher capacity/integration efficiency) relative to higher S/N ratios (low capacity/inefficient integration). Benefits of combined EEG/RT investigations include obtaining generalized measures of integration efficiency in speech perception, which should provide useful benchmarks of performance for normal-hearing and clinical populations.
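The workload capacity coefficient referenced here has a published closed form (Townsend and Nozawa, 1995): C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)], where each S(t) is the survivor function of an RT distribution. A minimal sketch of that computation, with invented sample RTs and no connection to the authors' actual analysis code:

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return np.mean(np.asarray(rts) > t)

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Workload capacity for a first-terminating (OR) task,
    C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)]
    (Townsend and Nozawa, 1995). C(t) > 1 indicates efficient
    (super-capacity) audiovisual integration."""
    s_av, s_a, s_v = (survivor(x, t) for x in (rt_av, rt_a, rt_v))
    if s_av <= 0 or s_a <= 0 or s_v <= 0:
        return np.nan  # undefined where a survivor function reaches 0
    denom = np.log(s_a) + np.log(s_v)
    if denom == 0:
        return np.nan  # undefined where both unimodal survivors are 1
    return np.log(s_av) / denom
```

With these invented RTs (ms), where the audiovisual condition is faster than either unimodal condition, `capacity_coefficient([400, 500, 600], [500, 600, 700], [500, 600, 700], 550)` evaluates to about 1.35, i.e., super capacity.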

Presenter: Nicholas Altieri, Indiana University
Co-Authors: Michael Wenger

Back to Top >


Day: Tuesday     Poster Number: 35
Title: Is the influence of auditory training on behavioral and electrophysiological measures of temporal resolution different in younger and older adults?

Abstract:
The deterioration in the ability to process rapid changes in auditory input may play a primary role in the difficulty many older listeners experience perceiving speech. In young adults, perceptual skills have been found to improve with practice. However, there is very little information on the degree to which practice improves performance on perceptual tasks in older adults. Twelve younger and twelve older adults are in the process of being trained for 10 sequential 1-hour sessions on a between-channel gap-detection task (a gap between a 2-kHz and a 1-kHz noise band). Performance on this task is assessed one day and one month after the last training session. We are also testing the extent to which the benefits of training generalize to other frequencies and to the untrained ear. In addition, pre- and post-training event-related potential (ERP) measures are being obtained to assess cortical changes in the response to temporal gaps induced by training.

Presenter: Meital Avivi-Reich, University of Toronto (Human Communication Lab)
Co-Authors: Bruce Schneider

Back to Top >


Day: Tuesday     Poster Number: 36
Title: Age-Related Changes in Auditory Processing of Speech-like Stimuli Assessed at the Population Level

Abstract:
Loss of temporal processing in elderly human listeners due to central auditory pathway deficits may result in loss of speech perception with age. This study aims to further understand the processing of speech-like stimuli in aging using non-invasive electrophysiological measurements in a rat model system, which can be complemented by data obtained from single-unit recordings. Frequency-following responses (FFRs) were obtained for speech stimuli that differ only in their voice onset time (/ba/ vs. /pa/), presented to aged (20-22 months old) and young (3-5 months old) Fischer-344 rats in quiet and in the presence of background noise. The envelopes of these speech sounds, transposed onto a noise carrier or onto a high-frequency carrier tone, were also presented to the animals in order to study complex envelope representations throughout the cochleotopic map. FFRs to opposite stimulus polarities were summed or subtracted to isolate the responses to the envelope and to the fine structure, respectively. These responses were compared with responses to periodic stimuli such as sinusoidally amplitude-modulated (sAM) and frequency-modulated (sFM) tones, as well as sAM tones with various rise times. We have previously demonstrated that aged animals show decreased fidelity in envelope shape processing as well as reduced processing of temporal cues. Preliminary results from responses to speech stimuli suggest that the representation of the envelope is degraded by noise more than the representation of the fine structure. Since our previous data indicate that the representation of envelope cues is already reduced in aged animals, further degradation of the envelope representation by noise would be expected to produce greater perceptual difficulties in aged individuals. Comparative differences in the processing of envelope and fine structure between young and aged animals, in quiet as well as in the presence of background noise, are discussed. Work supported in part by the American Federation for Aging Research.
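The polarity sum/difference decomposition described above is a standard FFR analysis step; a minimal sketch with synthetic signals (not the study's data or code):

```python
import numpy as np

def envelope_and_tfs(ffr_pos, ffr_neg):
    """Decompose FFRs recorded to opposite stimulus polarities:
    the half-sum emphasizes the envelope-following response
    (polarity-invariant), and the half-difference emphasizes the
    temporal-fine-structure response (polarity-inverting)."""
    ffr_pos = np.asarray(ffr_pos, dtype=float)
    ffr_neg = np.asarray(ffr_neg, dtype=float)
    env = (ffr_pos + ffr_neg) / 2.0
    tfs = (ffr_pos - ffr_neg) / 2.0
    return env, tfs
```

In the idealized case where the fine-structure component of the response inverts with stimulus polarity while the envelope component does not, the two components are recovered exactly.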

Presenter: Edward Bartlett, Purdue University
Co-Authors: Aravindakshan Parthasarathy

Back to Top >


Day: Tuesday     Poster Number: 37
Title: Markers of inhibitory and excitatory synaptic function and their relationship to auditory-evoked responses in young and aged animals

Abstract:
Knowledge of the causes of presbycusis within the periphery outstrips what is known about hearing loss caused by changes within the central nervous system. This research sought to further elucidate the underpinnings of presbycusis within the ascending auditory pathway in a rat aging model. It is known that GABAergic inhibition declines with age, and there is some evidence that excitatory glutamatergic synapses may also be affected. However, it is unknown whether changes in excitatory synapses are solely due to changes in ascending pathways, which are marked by the presence of the vesicular glutamate transporter 2 (VGluT2) in their terminals. Specifically, this research tested the hypothesis that differences in concentrations of markers for inhibitory and excitatory neurotransmitter function could be correlated with auditory-evoked responses. Young (3-5 months) and aged (22-24 months) Fischer 344 rats were tested for phasic and sustained auditory brainstem responses (ABRs) in order to examine their hearing sensitivity and temporal acuity. Brains from the recorded animals were then coronally sectioned at 30-µm intervals and immunohistochemically stained for GAD65/67 and VGluT2. Relative optical densities and fractional densities of the regions of interest (ROIs) were compared between young and aged animals and compared to the auditory-evoked responses from the same animals. Preliminary results indicate significant differences in GAD65/67 concentrations between young and aged animals. Relative optical densities for GAD65/67 indicated a decrease in protein concentration as well as a decrease in fractional density with age. These data were correlated with higher auditory thresholds and poorer synchronized responses in aged animals. Preliminary data indicated that VGluT2 labeling showed differences between animals, but these were less clearly correlated with auditory function.
Work supported in part by the National Science Foundation and the Howard Hughes Medical Institute and the American Federation for Aging Research.

Presenter: Edward Bartlett, Purdue University
Co-Authors: Zachery Fisher; Aravindakshan Parthasarathy; Stephanie Gardner

Back to Top >



Day: Tuesday     Poster Number: 38
Title: The Test of Rating of Emotions in Speech (T-RES): A Novel Tool to Evaluate the Impact of Prosody and Lexical Content on Emotional Judgments of Speech

Abstract:
When the sentence 'I am so happy' is spoken in an angry prosody (i.e., tone of voice), is it interpreted as happiness or anger? This illustrates the complex interaction of the two dimensions that convey emotion in speech: lexical content and prosody. How is each dimension weighed perceptually? Is the rating of one impacted by the other? We present a comprehensive paradigm that examines the interplay between these two dimensions. Fifty lexical sentences, reliably associated with one of five emotions (anger, fear, sadness, happiness, and neutral), were each recorded in five distinct prosodies. Each combination of emotions was presented twice, creating congruent, incongruent, and neutral spoken sentences. Eighty participants were asked to rate the degree to which each spoken sentence expressed each emotion in the lexical content, the prosody, or both. Results show that "processing strategies" for integrating lexical and prosodic dimensions were dependent on the nature of the rated emotion.

Presenter: Boaz Ben-David, Toronto Rehab and University of Toronto
Co-Authors: Namita Multani; Nicole Durham; Pascal Van Lieshout

Back to Top >


Day: Tuesday     Poster Number: 39
Title: Language-Specific Tuning of Audiovisual Integration in Early Development

Abstract:
Perceptual narrowing is one way to characterize the age-related changes observed in infants’ ability to perceive phonemes from non-native languages (Werker & Tees, 1984, 2005). Infants are typically tested with single phonemes presented auditorily and with no accompanying visual stimuli (Werker & Tees, 1983; Kuhl, Williams, Lacerda, Stevens, & Lindblom, 1992). Understandably, this allows researchers to focus on sensitivity to the speech signal itself while removing all other perceptual influences. From a perceptual standpoint, however, broad multisensory abilities are selectively narrowed according to infants’ environmental experience. Thus, infants’ sensitivity to all forms of environmentally irrelevant information decreases while sensitivity to environmentally relevant information increases (Lewkowicz & Ghazanfar, 2009; Pons, Lewkowicz, Soto-Faraco, & Sebastián-Gallés, 2009; Scott, Pascalis, & Nelson, 2007). For typically developing infants, the emergence of phonemic sensitivity happens in the context of multimodal speech. The goal of the present study was to determine whether infants demonstrate perceptual narrowing for their native language relative to an unfamiliar language in the context of fluent audiovisual speech. Infants were tested using a serial audiovisual presentation looking time paradigm. A broad age range was included to capture individual differences in emerging sensitivity to audiovisual speech using this novel paradigm. In the procedure, infants sat on their guardians’ laps while viewing a video consisting of randomized blocks of the four audiovisual speech conditions. Stimuli were manipulated systematically to include either phonetically congruent or incongruent audio and visual speech in either the infants’ native or a non-native language. Meanwhile, synchrony between the audio and visual signals was matched across conditions. Sessions were video-recorded and infants’ looking behavior was coded off-line.
Overall, there was a significant negative relationship between age and looking time for incongruent speech (p < .05), indicating that infants tended to look proportionally less at incongruent speech than at congruent speech as they got older. Infants looked proportionally more at blocks in which audiovisual stimuli were congruent than to those in which they were incongruent (p < .05). In particular, infants looked longer towards congruent English relative to the two incongruent conditions; no such effect emerged for Spanish. These results suggest that perceptual narrowing characterizes infants’ sensitivity to audiovisual speech as well as the auditory-only signal. This work extends the concept of perceptual narrowing to the domain of fluent audiovisual speech. Ongoing studies are examining the specificity of the narrowing process to either the auditory or visual signal in this bimodal context.

Presenter: Heather Bortfeld, University of Connecticut & Haskins Labs
Co-Authors:

Back to Top >


Day: Tuesday     Poster Number: 40
Title: Aging, Executive Functions, and Sentence Processing in Noise

Abstract:
Older adults have difficulty with sentence processing in noisy settings or when speech includes complex syntax or lacks redundancy. The goal of this study was to uncover cognitive and demographic factors contributing to this difficulty, specifically, whether executive functions (EF) may underlie some of the sentence processing difficulty in noisy settings. We tested 222 native English-speaking adults, aged 55-84, free from neurological deficits and dementia. Pure Tone Average and Speech Recognition Threshold tests assessed participants' hearing. We administered the Revised Speech Perception in Noise (SPIN) test and selected EF tasks. For the SPIN test, sets of sentences with simple syntax over a babble background were presented through headphones. Participants were asked to write the last word of each sentence. Sentences were administered in 4 different combinations of noise level (high/low) and predictability of target word (high/low). Regression analyses showed that, depending on the condition, 32-45% of the variation in SPIN performance was explained by age, education, gender, and hearing. In addition, executive functions provided a small but statistically significant contribution to performance. Stroop errors were significantly associated with level of accuracy in both high-predictability conditions. Digit Ordering Span was significantly associated with level of accuracy in the high-noise, low-predictability condition. Thus, even for the simple syntax of SPIN materials, executive functions involving working memory, attention, and inhibition contribute to sentence processing in noise. Older adults in whom executive functions are successfully recruited seem to be better able to process sentences under challenging conditions.

Presenter: Dalia Cahana-Amitay, Boston University
Co-Authors: Avron Spiro III; Martin Albert; JungMoon Hyun; Jesse Sayers; Emmanuel Ojo; Keely Sayers; Mira Goral; Loraine Obler

Back to Top >


Day: Tuesday     Poster Number: 41
Title: Neural encoding of simple and complex sounds declines with age

Abstract:
Older adults, even with clinically normal hearing sensitivity, have difficulty understanding speech in the presence of background noise. This difficulty may be partly due to age-related declines in the neural representation of sounds. To test this hypothesis, frequency-following responses (FFRs), which are dependent on phase-locked neural activity, were elicited using simple (toneburst) and complex (consonant-vowel) sounds to see if aging affects the neural encoding of tonal stimuli frequently used in psychophysical studies, as well as a complex speech-like sound. Thirty-one adults (ages 21-77) were tested. All participants had clinically normal hearing sensitivity (equal to or better than 25 dB HL at octave frequencies 0.25-8.0 kHz). FFRs were elicited by tonebursts in the 1000-Hz frequency range and analyzed using phase coherence and amplitude measures. Both types of stimuli yielded abnormal findings in aging adults. For tone-FFRs, phase coherence and amplitude showed significant decreases as age increased for frequencies at and slightly below 1000 Hz. For CV-FFRs, transient onset and offset response amplitudes were smaller as age increased. Offset responses were also delayed. Sustained responses at the onset of vowel periodicity were prolonged in latency and smaller in amplitude as age increased. Phase coherence of tone-FFRs significantly correlated with sustained CV-FFR measures obtained in the same individuals. These results suggest that the neural encoding of both types of stimuli is negatively affected by age and that, even in the absence of significant hearing loss, synchronous firing patterns involving the brainstem start to deteriorate in middle age. (Work supported, in part, by NIH-NIDCD grants R01-DC007705 (KT), F31-DC010553 (CC), and T32-DC00033.)

Presenter: Christopher Clinard, James Madison University
Co-Authors: Kelly Tremblay; Ananthanarayan Krishnan

Back to Top >


Day: Tuesday     Poster Number: 42
Title: Impairment and recovery of memory for difficult to hear items

Abstract:
This study examines how difficult perception of spoken words affects later memory for those words. Furthermore, we test whether the effect of difficult perception on memory is altered when more time is given to process the information. Good-hearing participants were asked to listen to lists of words and perform immediate free recall. To reveal how difficulty with hearing can affect recall for the list, one word within each list was masked (making that word difficult, but not impossible, to identify). Recall accuracy was significantly reduced for the masked word, and also for the non-masked word preceding it. We posit that associations between words are formed and strengthened during listening, and that these associations aid recall. Masking disrupts the formation of these associations, causing reduced recall for the masked and the prior, non-masked word. To observe the time-dependency of this effect, subjects also listened to lists that were presented at a slower presentation rate. As before, one word within these lists was masked to reveal whether recall is disrupted by difficult perception. For lists with a slower presentation rate, recall performance was completely recovered, and there was no effect of masking on memory. In the future we will test whether the negative effect of difficult perception on memory is also present in populations with poor hearing acuity. Among those who experience hearing loss, older adults are of special interest: their simultaneous declines in working memory may diminish or abolish the recovery effect we observed in this experiment.

Presenter: Katheryn Cousins, Brandeis University
Co-Authors: Paul Miller; Arthur Wingfield

Back to Top >


Day: Tuesday     Poster Number: 43
Title: Auditory Learning: Can Adult Cochlear Implant Listeners Learn to Hear Changes in Spectrally-Rippled Noise Stimuli?

Abstract:
Here we examined the use of auditory training exercises aimed at improving the perception of spectral details. Using a single-subject staggered-baseline design, each subject completed baseline and training sessions with the onset of training being randomized across subjects. Sixteen implanted participants (six Advanced Bionics; ten Cochlear) participated in the experiment. Eight served in the experimental group (training and outcome measures), and eight served as a control group that did not receive training (outcome measures only). Each participant in the training group participated in 32 hours of testing/training over the course of three months. Participants detected an inversion in the spectrum of a spectral ripple noise. The spectral density (number of ripples per octave) was varied to estimate a threshold. Thresholds were determined using an adaptive yes/no paradigm. Results showed significant improvements in ripple threshold for the trained group that generalized to improved speech reception thresholds (SRTs) in noise. [Supported by NIH grants F31DC010309, T32 DC005361, P30-DC04661]

Presenter: Kathleen Faulkner, University of Washington
Co-Authors: Kelly Tremblay; Jay Rubinstein; Lynne Werner; Kaibao Nie

Back to Top >


Day: Tuesday     Poster Number: 44
Title: Auditory factors associated with hearing aid benefit in older adults

Abstract:
This research aimed to study the association between peripheral hearing loss, performance on psychoacoustic tasks, and hearing aid benefit in older adults. A total of 60 older adult, bilateral hearing aid users were selected. Evaluation procedures included a 500-Hz masking level difference (binaural interaction), low-passed double dichotic digits (binaural integration), pitch pattern sequence (temporal ordering), and the adaptive test of temporal resolution (temporal resolution), along with pure-tone audiometry. Outcome measures for hearing aid benefit included laboratory procedures such as hearing-in-noise and words-in-noise tests (aided and unaided), and a self-report tool, the Amsterdam Inventory for Auditory Disability and Handicap. Hearing aid benefit was defined as the difference in outcome measure results between the aided and unaided conditions. Correlations and bivariate models were computed in order to determine associations between the auditory factors and hearing aid benefit. Results indicated that pure-tone thresholds and temporal resolution were not correlated with hearing aid benefit. Binaural interaction, binaural integration, and temporal ordering were associated with some of the measures of hearing aid benefit. It is hypothesized that successful binaural fitting highly depends on subjects' performance on binaural integration and binaural interaction tasks.

Presenter: Adrian Fuente, The University of Queensland
Co-Authors: Louise Hickson

Back to Top >


Day: Tuesday     Poster Number: 45
Title: Speech-in-noise identification in elderly listeners with audiometrically normal hearing: Contributions of auditory temporal processing and cognition

Abstract:
The standard treatment for age-related hearing loss is via hearing aids. However, some of the reported perceptual deficits may be related to the age of the listener rather than hearing loss per se. Empirical evidence seems to support the link between an aging sensory/cognitive system and speech perception, but the nature of the deficit is still a matter of debate. Also, the effect of aging on speech processing has mainly been studied via group differences between elderly and young listeners with (often) only partially matched audiograms on a given task rather than by administering several auditory and cognitive tasks. Here, 21 elderly listeners (age range 60-79 years) with bilateral audiometrically normal hearing (≤20 dB HL) between 0.125 and 6 kHz (ENH), and eight young normal-hearing listeners (YNH), were tested on (i) speech-in-noise identification tasks (with consonants and sentences as targets), (ii) supra-threshold auditory processing tasks of temporal-envelope and temporal-fine-structure (TFS) cues, and (iii) cognitive tasks. Compared to YNH listeners, ENH listeners showed poorer speech identification in steady noise, amplitude-modulated noise, and speech babble, but not in quiet. Identification was particularly affected by speech babble spatially co-located with the target, but spatial release from masking was similar for the two groups. Sensitivity to TFS was worse for the ENH listeners and correlated positively with speech-in-noise perception. Cognitive abilities were generally lower for the ENH listeners and correlated positively with speech-in-noise perception, even after partialling out TFS sensitivity. (Work supported by RNID and MRC UK.)

Presenter: Christian Fullgrabe, Medical Research Council, UK
Co-Authors: Brian Moore; Michael Stone

Back to Top >


Day: Tuesday     Poster Number: 46
Title: The effect of aging and hearing impairment on spatial processing

Abstract:
Older hearing-impaired adults often report greater difficulty hearing in noise than can be explained by reduced audibility of speech. This research investigated whether reduced spatial processing ability may be a cause of this difficulty. Spatial processing ability is defined as the capacity to focus on target sounds coming from one direction while suppressing sounds from other directions. Using the Listening in Spatialized Noise - Sentences test (LiSN-S) with a built-in prescribed-gain amplifier, we measured spatial processing ability under headphones for 80 participants, aged 7 to 89 years, with hearing thresholds ranging from within normal limits to a moderately severe sensorineural hearing loss. The effects of age, cognition, and degree of hearing loss on spatial processing ability were investigated, as well as the relationship between spatial processing ability and self-reported difficulties hearing in noise. Thus, in addition to the LiSN-S, participants 18 years of age and older completed the Neurobehavioral Cognitive Status Examination (COGNISTAT) and the Speech, Spatial & Qualities of Hearing questionnaire (SSQ). Spatial processing ability, as measured by the spatial advantage measure of the LiSN-S, was found to be significantly correlated with hearing loss (p < 0.001, partial r² = 0.66) but not significantly correlated with age (p = 0.10, partial r² = 0.06) nor with results on the COGNISTAT (p = 0.7, partial r² = 0.00). These results suggest that aging itself does not affect spatial processing ability; however, given that most older adults have some degree of hearing loss, they are still likely to have poorer spatial processing ability than normal hearers.

Presenter: Helen Glyde, University of Queensland
Co-Authors: Sharon Cameron; Harvey Dillon; Louise Hickson; Mark Seeto

Back to Top >


Day: Tuesday     Poster Number: 47
Title: What makes a good talker? Perceptual ratings by younger and older listeners of speech and voice samples produced by younger and older talkers

Abstract:
The perceptual evaluation of speech and voice quality is typically conducted for clinical reasons to identify and describe problems. However, little research is focused on characteristics of "normal" voices that make them sound pleasant or easier to understand, and which characteristics of speech and voice are important to listeners who are not clinicians. The primary objective of the current study is to present speech and vowel samples from a large number of younger and older talkers to listeners who do not have clinical training, to determine which acoustic properties of speech and voice lead listeners to rate voices as being 'good' or 'poor'. Preliminary data indicate that younger listeners perceive speech to be more pleasant if talkers speak more slowly with a larger F0 variation, and younger listeners judge the speech of younger talkers to be more pleasant than the speech of older talkers. Data are also being gathered from a sample of older adults to determine if older listeners use the same criteria as younger listeners for evaluating speech and voice quality.

Presenter: Huiwen Goy, University of Toronto
Co-Authors: Kathy Pichora-Fuller; Pascal van Lieshout

Back to Top >


Day: Tuesday     Poster Number: 48
Title: Eyetracking in competing speech perception: Older and younger adults

Abstract:
Many listening situations require individuals to attend to one message while ignoring other background talkers. Although age-related changes in speech perception in the presence of competition are well documented, less is known about how aging influences the ability to ignore interfering speech. In the present study, eye movements were recorded during three listening conditions: a single sentence presented in quiet; one sentence presented with steady-state noise; and two equal-level sentences presented concurrently. The ear receiving the to-be-attended sentence was selected randomly from trial to trial and the competing noise or speech was presented to the other ear. Participants (younger and older adults) were told to listen for the sentence beginning with "Theo" (the target sentence) and to ignore the other sentence (the distracter sentence, which began with another proper name) or the noise. A computer screen showed two words on each trial. One word was the first or second key word from the target sentence. In the quiet and noise conditions, the other word was a random word that was never heard during the experiment. In half of the competing speech trials, the second visually-presented word was from the distracter sentence; a random word was displayed in the other half of these trials. Participants were instructed to click on the word heard in the target sentence. To investigate lexical activation of words in to-be-attended and to-be-ignored speech streams, analysis of eye movements focused on the time course of fixations on each visually-displayed word, time-locked to the onset of each word in the auditory stimuli. This poster will describe differences in performance between the two groups of listeners in terms of both the ability to understand the target speech (i.e., percent-correct performance) as well as listeners' subconscious processing of the competing speech message (that is, eye movement patterns). 
Preliminary analysis of the data reveals significant group differences in performance in terms of accuracy of word identification and reaction time in the competing speech condition, but not in quiet or in the presence of noise. Our initial analysis also shows that, as compared to younger participants, older adults made more looks to the non-target word shown on the screen during the competing speech condition. These results suggest that, compared to younger listeners, older adults are more negatively affected by a competing signal that is speech.

Presenter: Karen Helfer, University of Massachusetts
Co-Authors: Adrian Staub

Back to Top >


Day: Tuesday     Poster Number: 49
Title: A role for modulation sensitivity in age-related declines in speech understanding?

Abstract:
Advancing age can lead to an impaired representation of temporal information in the auditory system as well as difficulty understanding speech. Both effects are highly variable across individuals. Because speech information is conveyed at least in part by temporal cues, we predicted a relationship between basic sensitivity to temporal information and use of those cues for speech understanding. Listeners aged 28 to 83 with hearing ranging from normal to moderate sensorineural loss were tested on their ability to understand sentences using primarily temporal cues. To encourage listeners to rely on temporal cues, low-context sentences were filtered into four or six channels, each of which was then sine-vocoded using envelope smoothing filter cutoffs of 40, 80, 160, or 320 Hz. The ability of each listener to detect the presence and discriminate the rate of amplitude modulation was measured using a broadband noise carrier sinusoidally modulated at rates of 40, 80, 160 and 320 Hz. Results show that older listeners performed more poorly in the speech task than did younger listeners. Detection and discrimination of amplitude modulation were both unrelated to age or speech understanding. These results suggest that factors beyond basic sensitivity to temporal information govern the effect of aging on speech. [Work supported by NIH and the VA]
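The channel-vocoding manipulation described above can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' stimulus-generation code: the log-spaced channel edges, brick-wall FFT filtering, and frequency range are assumptions chosen for demonstration.

```python
import numpy as np

def _bandpass_fft(x, fs, lo, hi):
    """Zero out spectral components outside [lo, hi] Hz (brick-wall filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def _lowpass_fft(x, fs, cutoff):
    """Zero out spectral components above `cutoff` Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[f > cutoff] = 0.0
    return np.fft.irfft(X, len(x))

def sine_vocode(x, fs, n_channels=4, env_cutoff_hz=160.0,
                f_lo=100.0, f_hi=8000.0):
    """Divide `x` into log-spaced channels, extract each channel's envelope
    (rectify + low-pass at env_cutoff_hz), and use it to modulate a sine
    carrier at the channel's geometric center frequency."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = _bandpass_fft(x, fs, lo, hi)
        env = _lowpass_fft(np.abs(band), fs, env_cutoff_hz)
        env = np.clip(env, 0.0, None)          # an envelope cannot be negative
        fc = np.sqrt(lo * hi)                  # geometric channel center
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

Raising `env_cutoff_hz` from 40 to 320 Hz preserves progressively faster envelope fluctuations, which is the temporal-cue manipulation the abstract describes.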

Presenter: Eric Hoover, Northwestern University
Co-Authors: Pamela Souza; Frederick Gallun

Back to Top >


Day: Tuesday     Poster Number: 50
Title: Dual-task audition and balance performance: Effects of training and hearing impairment

Abstract:
Comprehension of speech often occurs while the body is in motion. A growing body of evidence indicates that walking and balancing require more cognitive control with aging. Similarly, normal-hearing older adults appear to recruit higher-level cognitive processes to compensate for sensory declines. This reliance on higher-level processes for both auditory and motor tasks may therefore produce resource competition (dual-task costs) in auditory-motor situations. In the current study, we compared 12 older adults with self-reported hearing impairment (SRHI) and 12 Controls; none of the participants reported vestibular impairments. We expected the SRHI group to show greater dual-task costs than the Control group. All participants performed an auditory working memory task (1-back), single-support standing balance, and both tasks concurrently. Following 12 sessions of combined fitness and cognitive training, the same auditory and balance tests were repeated. Single-support postural stability (mediolateral center-of-pressure variability) was significantly worse for the SRHI group than for the Control group in the pre-training phase (p < .05). In the post-training phase, both groups were comparably stable, performing at the level of the Control group at pre-training. Dual-task balance was marginally worse for the SRHI group at pre-training and did not improve significantly with training, whereas it did for the Control group (p < .05). These results inform models of sensory aging and multiple-task performance, and imply a greater risk of imbalance for older adults with hearing impairment. (Funding from the Canadian Institutes of Health Research)
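The dual-task cost construct at the center of this study is commonly quantified as the proportional decline from single- to dual-task performance. The function below is the standard proportional-cost formula, offered as an illustration; the abstract does not state which exact metric the authors used.

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Proportional dual-task cost: the fraction of single-task performance
    lost under dual-task conditions. Positive values indicate a cost."""
    return (single_task_score - dual_task_score) / single_task_score
```

For example, a balance score that drops from 0.90 alone to 0.72 while concurrently doing the 1-back task corresponds to a 20% dual-task cost.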

Presenter: Karen Li, Concordia University
Co-Authors: Sarah Fraser; Chantal Joly; Kiran Vadaga; Louis Bherer

Back to Top >


Day: Tuesday     Poster Number: 51
Title: Physical Fitness Training for Persons with Aphasia

Abstract:
Research demonstrates a positive connection between physical fitness training and cognition. The current study aimed to identify improvements in language and cognition in persons with chronic aphasia following a six-week exercise protocol. The study employed a controlled single-subject experimental design with five participants with chronic aphasia. The protocol involved a 6-week treatment condition (i.e., exercise sessions) and a 6-week no-treatment condition (i.e., conversation sessions). Language and cognition measures were taken pre-, post-, and between conditions, as well as at a 6-week follow-up. Exercise sessions consisted of 30 minutes of light aerobic exercise on a recumbent bike under a personal trainer's supervision, and concluded with probe tasks. There was overall language improvement (i.e., increased WAB scores) after each condition for most participants, with greater improvement following the exercise condition for 3 of 5 participants. Fluency tasks also revealed greater improvements following the exercise condition than the conversation condition. Analysis of oral discourse samples revealed a greater increase in correct information units following the exercise condition for 4 of 5 participants. Analysis of written discourse samples revealed a dramatic increase in total number of words for 4 of 5 participants, as well as an increase in complete sentences produced. Improvements were also observed on tests of cognition (i.e., Flanker task and Trail Making task) by most participants, with greater improvements following the exercise condition. The low attrition rate indicates that exercise with a stationary, recumbent bike is a low-stress, feasible option for this patient population. Furthermore, most participants in the study demonstrated improvements on a variety of language and cognitive tests, with greater increases observed during the exercise condition as compared to the conversation condition. 
These findings are commensurate with growing research indicating the beneficial effect of physical fitness training, and extend the current research to include the clinical aphasia population.

Presenter: Bonnie Lorenzen, Indiana University
Co-Authors: Laura Murray

Back to Top >


Day: Tuesday     Poster Number: 52
Title: Time window of speech recognition in silent and noisy conditions: A gating paradigm study

Abstract:
The aims of the present study were to measure consonant, word, and sentence identification in silent and noisy conditions using a gating paradigm (Grosjean, 1980), and to investigate the contribution of cognitive and auditory variables to speech perception in both conditions. Twenty-one college students participated in this study. Gated consonants, words, and sentences (high-predictability and low-predictability sentences) presented in silence and in noise (0 dB SNR) showed that noise significantly increases the time window required to identify auditory materials. The effects of noise depended strongly on the nature of the stimuli; for example, noise had a less adverse effect on high-predictability sentences than on low-predictability sentences or words alone. The effects of noise divided the consonants into three categories: /t/, /k/, /s/, /j/, and /p/ required the least time for correct identification; /v/, /n/, /ng/, /l/, /f/, /b/, /d/, and /sj/ were intermediate; and /g/, /h/, /r/, and /rt/ required the most time. The results also showed that, in the noisy condition, there were significant correlations among the cognitive measures (working memory and attention tests), the Hearing in Noise Test (HINT), consonant identification in noise, and word identification in noise. These findings support the Ease of Language Understanding (ELU) model (Rönnberg, 2008).

Presenter: Shahram Moradi, IBL, Linköping University
Co-Authors: Jerker Rönnberg; Björn Lidestam

Back to Top >


Day: Tuesday     Poster Number: 53
Title: Evaluation of the Active Communication Education (ACE) program in two Swedish samples

Abstract:
The ACE program (Hickson et al., 2007) is an interactive, problem-solving program whose primary aim is to reduce communication difficulties in everyday life. The program consists of five weekly two-hour sessions with six to ten participants, and is designed for people with hearing loss, with or without hearing aids, and their significant others. The ACE program was translated and evaluated in two Swedish populations: people aged 87 years (n = 23) and a population with a mean age of 66 years (n = 25). Outcomes measured pre- and post-program were communication strategy use, activity and participation, health-related quality of life, and depression. Statistically significant pre-post differences in communication strategy use were found for the younger population, whereas no statistically significant differences were found for the older population. Post-program evaluations indicated that both populations found the program beneficial.

Presenter: Marie Öberg, Division of Technical Audiology
Co-Authors: Therese Bohn; Ulrika Larsson; Louise Hickson

Back to Top >


Day: Tuesday     Poster Number: 54
Title: Musical experience offsets age-related declines in neural timing

Abstract:
Aging disrupts neural timing, reducing the nervous system's ability to precisely encode sound. Indeed, this age-related decline in neural precision is thought to contribute to the speech-understanding deficits observed in older adults. Given that the neural representation of temporal features is strengthened with musical training in young adults, we asked whether musical experience would offset age-related declines in neural timing. Subcortical encoding of speech in younger (18-30 years) and older (45-65 years) groups of normal-hearing musicians and nonmusicians was examined. Neural response timing in older nonmusicians was delayed relative to younger nonmusicians. Older musicians, however, did not demonstrate this shift and had neural response timing equivalent to that of younger musicians. As such, we provide the first evidence that life-long musical experience offsets the negative effects of aging on the subcortical encoding of sound. (Work supported by NIH-NIDCD R01-DC010016; NSF 0842376 & 1057556)

Presenter: Alexandra Parbery-Clark, Northwestern University
Co-Authors: Nina Kraus

Back to Top >


Day: Tuesday     Poster Number: 55
Title: Perceptions of age and brain in people with hearing impairment in regards to hearing healthcare seeking and rehabilitation

Abstract:
A qualitative analysis was performed to gather the perspectives of adults with hearing impairment on hearing healthcare seeking, rehabilitation in general, and hearing aid use (or non-use) in particular. In-depth semi-structured interviews were completed in 4 countries with 34 adults with hearing impairment (mean age = 67.4 years, SD = 13.6). Participants in different stages of the hearing healthcare process were asked to explain "things that you have done, or not done, about your hearing difficulties." Via qualitative content analysis the themes of Age and Brain emerged from the data. These superordinate themes were analyzed further via interpretative phenomenology to create models of themes and sub-themes. The age data sorted into three themes and eight sub-themes (in parentheses): Expectations (Brain Plasticity, Hearing Aid Benefit, Acceptance); Stigma (Self-Image, Ageism, Humor); Ways of Coping (Priorities, Relationships). The brain data sorted into two themes and nine sub-themes: Cognitive Operations (Concentration, Effort, Processing, Memory); Brain Plasticity (Training, Filtering, Passive Plasticity, Negative Plasticity, Hearing Aid Benefit). Adults with hearing impairment thought of their age and their brain as factors which contributed to their hearing impairment, speech communication, and benefit from hearing healthcare and hearing aids. The brain was discussed in terms of the cognitive operations which may either inhibit or improve speech communication. Participants believed that cognitive decline which accompanies older age may limit hearing aid benefit. Hearing healthcare providers may wish to dispel negative messages about age and the brain as older brains are capable of responding to training to take advantage of a newly amplified signal. (Work supported by the Oticon Foundation and carried out in collaboration with Eriksholm Research Centre.)

Presenter: Jill Preminger, University of Louisville
Co-Authors: Ariane Laplante-Lévesque; Sophia Kramer

Back to Top >


Day: Tuesday     Poster Number: 56
Title: Temporal Integration of Gated and Temporally-Altered Speech in Older and Hearing Impaired Listeners.

Abstract:
The effect of age and hearing loss on temporal integration of interrupted speech was investigated using gating, time-compression and time-expansion. In the control condition that preserved the original speech duration and temporal structure, HINT sentences were gated at 0.5, 1, 2, 4, 8, or 16 Hz, using a 50% and 75% duty cycle. In the two experimental conditions, previously gated stimuli were either time-compressed by concatenating consecutive speech segments or time-expanded by doubling the silent intervals between consecutive segments. Across rates, these manipulations thus varied both the size of intact speech intervals and the duration of silence between the intervals. Results indicate a similar overall performance of older normal-hearing (NH) and hearing-impaired listeners for all three interruption methods. For each duty cycle, gated sentences were overall more intelligible than either time-compressed or time-expanded sentences. Intelligibility of gated and both time-altered conditions was always similar at the lower and higher rates. However, at mid-rates of 2-4 Hz local minima were observed for the two time-altered conditions. Compared with the previous results for young NH adults (Shafiro et al., JASA-EL, in press), these findings indicate that age and hearing loss have a minor influence on the rate-dependent intelligibility variation for interrupted speech. [Work supported by NIH/NIDCD]
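The three interruption methods above can be sketched as simple waveform operations. This is an illustrative reconstruction under assumed parameters (square-wave gating, sample-domain concatenation), not the stimulus-generation code of Shafiro et al.

```python
import numpy as np

def gate(x, fs, rate_hz, duty=0.5):
    """Square-wave gating: keep `duty` of each cycle, silence the rest.
    Returns the gated waveform and the boolean keep-mask."""
    period = int(fs / rate_hz)
    on = int(period * duty)
    mask = (np.arange(len(x)) % period) < on
    return x * mask, mask

def time_compress(x, mask):
    """Time-compression: concatenate the retained segments, dropping the gaps."""
    return x[mask]

def time_expand(x, fs, rate_hz, duty=0.5):
    """Time-expansion: double each silent interval between retained segments."""
    period = int(fs / rate_hz)
    on = int(period * duty)
    off = period - on
    out = []
    for start in range(0, len(x), period):
        out.append(x[start:start + on])      # retained speech segment
        out.append(np.zeros(2 * off))        # silence, doubled in duration
    return np.concatenate(out)
```

At a 4-Hz rate with a 50% duty cycle, gating preserves overall duration, compression halves it, and expansion lengthens it by half, which is the duration/structure trade-off the abstract manipulates.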

Presenter: Valeriy Shafiro, Rush University Medical Center
Co-Authors: Stanley Sheft; Robert Risley

Back to Top >


Day: Tuesday     Poster Number: 57
Title: Reliability of the Abbreviated Nonsense Syllable Test

Abstract:
The Nonsense Syllable Test (NST) is a standardized speech-identification test involving closed-set identification of nonsense syllables. While the organization of the test permits detailed analyses of consonant confusions, its usefulness is limited by the long time it takes to administer. An abbreviated version of the NST was developed by Gelfand et al. (1992). They reported data on young normal-hearing listeners at presentation levels ranging from 20 dB to 52 dB SPL in quiet and in cafeteria babble at a 5-dB SNR. The present study was aimed at replicating the work of Gelfand et al. (1992) at 52 dB SPL and at extending it to a higher presentation level (90 dB SPL) more suited to older and/or hearing-impaired listeners. The present study was also aimed at determining test-retest reliability statistics for this abbreviated version of the NST in a group of young normal-hearing listeners at 90 dB SPL in the presence of varying levels of cafeteria babble. Syllable identification scores were measured in two separate sessions within 1-3 days of each other. Statistical analyses revealed no significant differences between test and retest mean scores. Results also showed significant correlations between test and retest scores for the 90 dB SPL condition. The results indicate that this abbreviated version of the NST is a potentially useful measure of nonsense syllable identification at higher presentation levels. The advantage of this shortened version of the NST is that it continues to allow detailed analyses of consonant confusions while greatly reducing test time.

Presenter: Mini Shrivastav, University of Florida
Co-Authors: David Eddins

Back to Top >


Day: Tuesday     Poster Number: 58
Title: Development of a clinically feasible auditory word-span memory measure

Abstract:
This study describes the results of an auditory (monosyllabic) word span measure that was developed to test auditory working memory and word recognition simultaneously. Younger listeners (n = 24) with normal hearing completed the word span test in quiet using three different processing tasks [categorization judgments based on whether the word (1) referred to an object vs. a non-object, (2) began with a letter from the first vs. second half of the alphabet, or (3) no processing task]. The participants also completed digit span tasks (forward, backward, sequencing), auditory and visual free recall, vocabulary, and word-recognition-in-noise testing. There was no significant difference between the word span scores for the object and alphabet categorization tasks; however, both were significantly lower than the word span score when there was no processing task. There was a significant, moderate correlation between word span scores and word-recognition performance in noise. There was no significant correlation between the word spans (object or alphabet) and digit span, auditory or visual free recall, or vocabulary. Word span with no processing task was significantly and moderately correlated only with forward digit span and vocabulary. Importantly, word-recognition-in-noise performance was moderately related to the word span measure when an additional processing load was imposed by a categorization task, whereas other memory measures such as digit span and auditory or visual free recall were not. The newly developed auditory word span measure may allow audiologists to obtain auditory working memory and word recognition information from patients using a single test. This work was supported by a Veterans Affairs Rehabilitation Research and Development (VA RR&D) Career Development Award to the first author (#C6394W; Drs. Pichora-Fuller and Wilson serve as research mentors) and by the VA RR&D Auditory and Vestibular Dysfunction Research Enhancement Award Program to the first and third authors (#C4339F).

Presenter: Sherri Smith, Mountain Home TN VA Medical Center
Co-Authors: M. Kathleen Pichora-Fuller; Richard Wilson; Kelly Watts

Back to Top >


Day: Tuesday     Poster Number: 59
Title: Multisensory Integration in Adult Cochlear Implant Users: Temporal Factors

Abstract:
Multisensory integration depends on the temporal structure of sensory inputs. A construct for these effects is the temporal binding window (TBW), within which stimuli tend to be perceptually bound. Here we explore the TBW in cochlear implant (CI) users: how it differs from that of typical listeners, how these differences correlate with the integrative abilities of CI users, and how they relate to CI proficiency. The audiovisual TBW was measured via simultaneity-judgment and temporal-order-judgment tasks in which participants were presented auditory (A), visual (V), and audiovisual (AV) stimuli, including simple flashes of light and beeps as well as speech. Stimuli were presented at SOAs from -400 to 400 ms (positive = V first). Typical and CI listeners exhibited a wider TBW with speech than with simple stimuli, and were equally likely to perceive synchronous stimuli as simultaneous. Relative to typical listeners, CI users showed wider A and AV TBWs, lower perception of the McGurk effect, and lower perception of the sound-induced flash illusion (SIFI). These results suggest that CI users had less precise temporal integration and that the auditory signal exerted less cross-modal influence on their perception. CI users were split according to CNC scores into proficient and non-proficient users. Relative to non-proficient users, proficient users showed narrower TBWs with A, V, and AV stimuli and increased McGurk scores. These data suggest a relationship between temporal sensory processing, the ability of CI users to integrate, and CI proficiency: the narrower the TBW, the more proficient the CI user and the higher the rate of integration, as measured through the McGurk effect. 
These data show significant differences between CI users and typical-hearing individuals in the width of the TBW measured in a variety of ways, suggesting striking differences in the manner in which CI users combine visual information with the auditory signals provided by their implant, differences that have implications for how these individuals interact with the world. Such results suggest that future work in CI rehabilitation may employ auditory and multisensory-based training methods in an effort to narrow the TBW.
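One common way to quantify a TBW from a simultaneity-judgment task is to take the span of SOAs over which the proportion of "simultaneous" responses stays above a criterion. The sketch below uses linear interpolation and an assumed half-maximum criterion; it is an illustration of the construct, not the authors' analysis code (which may instead fit sigmoid or Gaussian functions to each side of the distribution).

```python
import numpy as np

def tbw_width(soas_ms, p_simultaneous, criterion=0.5):
    """Width (ms) of the SOA range over which the linearly interpolated
    proportion of 'simultaneous' responses is >= criterion * its maximum."""
    soas = np.asarray(soas_ms, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    thresh = criterion * p.max()
    fine = np.linspace(soas.min(), soas.max(), 2001)  # dense SOA grid
    p_fine = np.interp(fine, soas, p)
    above = fine[p_fine >= thresh]                    # SOAs still "bound"
    return float(above.max() - above.min())
```

A wider value from this measure corresponds to the less precise temporal integration the abstract reports for CI users.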

Presenter: Ryan Stevenson, Indiana University
Co-Authors: Brannon Mangus; Juliane Krueger Fister; Justin Siemann; Andrea Hedley-Williams; Robert Labadie; Mark Wallace

Back to Top >


Day: Tuesday     Poster Number: 60
Title: Effects of Age and Hearing Loss on Working Memory Capacity and Speech Recognition: Some New Findings with PRESTO

Abstract:
This study investigated the effects of age and hearing loss on working memory and speech perception. Sixteen adults (aged 21 to 64 years) were categorized as having normal hearing (N = 8), asymmetric hearing loss (N = 4), or bilateral hearing loss (N = 4). The degree of hearing loss was determined by the four-frequency pure-tone average of the worse ear at 500, 1000, 2000, and 4000 Hz. Two sentence tests, the Hearing in Noise Test (HINT) and a new perceptually robust, high-variability sentence test (PRESTO), were used to test speech perception in quiet and noisy environments. Three immediate-memory-capacity tests (digit span, spatial span, and word span) were used to assess short-term and working memory. Across all 16 participants, speech perception accuracy was higher and less variable on HINT (Quiet: M = 99.73%, SD = 0.58%; Noise: M = 69.41%, SD = 14.62%) than on PRESTO (Quiet: M = 93.91%, SD = 7.87%; Noise: M = 56.87%, SD = 17.55%). PRESTO accuracy, but not HINT accuracy, was significantly correlated (Quiet: r = .55, p < .05; Noise: r = .50, p < .05) with word span scores. The data suggested that greater age was correlated with poorer accuracy on PRESTO in noise (r = -.52, p < .05) and with poorer word span scores (r = -.54, p < .05). Greater hearing loss was correlated (r = -.53, p < .05) with poorer forward digit span scores. Similarities to and differences from this general pattern, related to the presence and type of hearing impairment, will be discussed. (Research supported by NIDCD Training Grant T32-DC00012 and NIDCD Research Grant R01-DC000111 to Indiana University)

Presenter: Sushma Tatineni, Indiana University
Co-Authors: Jaimie Gilbert; David Pisoni

Back to Top >


Day: Tuesday     Poster Number: 61
Title: Neurophysiological measurement of active auditory segregation in young, middle-aged, and older brains

Abstract:
When an interaural phase difference (IPD) or frequency inharmonicity is introduced in one component of a harmonic complex tone, we perceive that component as distinct from the remainder of the complex tone. This ability to segregate concurrent sounds is of paramount importance when listening to speech in adverse listening conditions, and adults report greater difficulties in these listening situations as they age. We recorded auditory cortical evoked potentials (EPs) in young (n = 10, age range 20-35 years), middle-aged (n = 9, age range 47-58 years), and older (n = 7, age range 60-75 years) individuals with normal hearing. The stimulus was a 5-tone harmonic complex tone (3 seconds) with a 250-Hz fundamental frequency, amplitude-enveloped at a 40-Hz rate. Participants were asked to identify whether they heard one or two sounds in the complex tone. For the stimuli containing the second sound, an IPD (180 degrees) and/or frequency mistuning (4 or 16%) was introduced in the second harmonic at 1500 ms after stimulus onset. Mixed ANOVAs were conducted on the factor scores of a temporal PCA. Effects of neurobiological aging were observed in the P2 component for middle-aged and older listeners when the auditory object was parsed using two cues for auditory stream segregation (IPD and mistuning), but not when only one cue was present. These findings support previous reports of age-related changes in binaural hearing in midlife and at older ages, and document the impact of aging on auditory segregation. [Parts of this work supported by the Deafness Research Foundation]

Presenter: Ilse Wambacq, Montclair State University
Co-Authors: Ryanlynn Pachter; Janet Koehnke; Martha Ann Ellis; Joan Besing; Ann Marie De Pierro

Back to Top >


Day: Tuesday     Poster Number: 62
Title: Validation of a telephone-administered screening test for hearing impairment for use in the US.

Abstract:
During the past eight years, telephone screening tests for hearing impairment have been implemented in seven countries outside the US. Each of these tests has been based on a method originally developed in the Netherlands by Smits and colleagues at the VU University Medical Center, Amsterdam. The tests employ spoken three-digit sequences presented in a noise background. The speech-to-noise ratios of the stimuli are determined by an adaptive tracking method that converges on the level required to achieve fifty percent correct recognition. The stimuli for these tests are presented in the language or dialect appropriate to each country, and efforts have generally been made to ensure that the recorded sequences are equally identifiable. A US version of this test has been developed in a collaborative effort between Communication Disorders Technology, Indiana University, and VU University, Amsterdam. This poster reports data from a clinical validation of this test conducted in the Hearing Clinic at Indiana University, and partial data from a second project in which the test was administered to a large sample of listeners tested in three audiology clinics of the US Department of Veterans Affairs. In each of these studies, the US version of the telephone-administered test was found to correlate with other audiometric measures, including pure-tone and speech tests, at levels sufficient to support its use as a screening test for hearing impairment. (Supported in part by NIH/NIDCD Grant 1R43DC009719, Communication Disorders Technology, Inc., Indiana University, and the US Department of Veterans Affairs.)
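The adaptive tracking rule can be illustrated with a simple one-up/one-down staircase, which by construction converges on the 50%-correct point of the psychometric function. The step size, trial count, and averaging rule below are illustrative assumptions, not the published parameters of the digit-triplet test.

```python
def digit_triplet_srt(respond, start_snr=0.0, step_db=2.0, n_trials=25):
    """One-up/one-down adaptive track for a digit-triplet test.
    After a correct triplet the SNR is lowered (harder); after an error it
    is raised (easier), so the track oscillates around the SNR giving 50%
    correct. `respond(snr)` should return True when all three digits are
    repeated correctly. The SRT estimate is the mean SNR over the trials
    after the initial descent (here, trials 6 onward)."""
    snr = start_snr
    presented = []
    for _ in range(n_trials):
        presented.append(snr)
        snr = snr - step_db if respond(presented[-1]) else snr + step_db
    tail = presented[5:]
    return sum(tail) / len(tail)
```

For example, a simulated listener who is always correct above -8 dB SNR and always wrong below yields an SRT estimate near -7 dB with these settings.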

Presenter: Charles Watson, Indiana University
Co-Authors: Gary Kidd; Larry Humes; Cas Smits; Rachel McArdle; Andrea Bourne; Richard Wilson

Back to Top >


Day: Tuesday     Poster Number: 63
Title: Behavioral and neural correlates of speech-in-noise perception in older adult listeners

Abstract:
Event-related potentials (ERPs) can provide insight into the dynamic time course of auditory stimulus processing. Friedrich and Kotz (2007) investigated the timeline of integrating top-down and bottom-up speech cues in young adults using sentences presented in quiet. They interpreted their findings to suggest that phonological and semantic neural representations are activated early and integrated at a later stage during sentence comprehension. A follow-up study, also with young adults (Maxfield et al., in progress), used sentences embedded in background noise and found similar results; the use of temporal-spatial principal component analysis additionally unveiled effects of phonological and semantic integration of spoken sentences. These past studies provide evidence of the time course of speech perception in young adult listeners; however, it is unknown whether a similar pattern would be present in an older adult sample. Older adults often report difficulty understanding speech in the presence of background noise, and degradation in peripheral and central auditory processing, along with age-related cognitive decline, has been hypothesized as a reason for this difficulty. In the present study, we aimed to elucidate the time course and the behavioral and neural correlates of speech-in-noise processing in older adults (mean age = 63 yrs) with near-normal hearing. We investigated the integration of top-down and bottom-up cues from contextually rich sentences presented in background noise. At the end of each sentence, participants were shown a printed probe word that matched the sentence-final word (1) completely (match), (2) in meaning (semantic match), (3) in initial consonant (or consonant cluster) and vowel (phonological mismatch), or (4) not at all (mismatch). While ongoing electroencephalography was recorded, participants were asked to report the relatedness of the probe word to the entire sentence on a 4-point Likert scale. 
Data analysis of 18 older adult participants is in progress. Behavioral responses (Likert-scale judgments and reaction times) will be reported, along with ERPs time-locked to the presentation of the printed probe word, collected from a 64-electrode montage and processed using temporal-spatial principal component analysis. The present findings will be compared with those obtained from young listeners in noise (Maxfield et al., in progress) and in quiet (Friedrich and Kotz, 2007).

Presenter: Victoria Williams, University of South Florida & Bay Pines Veterans Affair Health Care System
Co-Authors: Nathan Maxfield; Jennifer Lister; Jennifer O'Brien

Back to Top >



Website maintained by: Indiana University Conferences
Web inquiries: Indiana University Conferences
Phone: 800.933.9330 (within US) or 812.855.4661
Last updated: November 8, 2011
Photos courtesy of Indiana University