FRIDAY MORNING, 20 MAY 2005 REGENCY E, 8:30 A.M. TO 12:00 ...

and hard words, will be described. Work supported, in part, by a research grant from NIA, R01-AG08293, awarded to the second author, and an NIH training grant.

5aSC3. Comparison of speech intelligibility measures. Jacqueline S. Laures and Gary G. Weismer (Georgia State Univ., Atlanta, GA 30302 and Univ. of Wisconsin-Madison, Madison, WI)

The speech intelligibility of dysarthric speakers is perceptually measured by one of the following four techniques: direct magnitude estimation with a modulus, free-modulus magnitude estimation, interval scaling, and transcription. Weismer and Laures [2002] suggest that magnitude estimation may provide a more complete representation of speech intelligibility than other methods of measurement because it may be more sensitive to non-segmental aspects of speech, such as prosody. However, the empirical data supporting such a statement are quite limited. The purpose of the current study is to explore the relationship among the four measurement techniques to determine whether one approach may provide a more accurate determination of the speech intelligibility of dysarthric speakers. Twelve listeners measured the speech of six dysarthric speakers and two healthy control speakers using the four measurement techniques. Each speaker produced three sentences twice. The sentences were presented via a loudspeaker in a sound-attenuated booth. Listeners rated the sentences using the four techniques; the order of techniques was counterbalanced. A correlation analysis revealed that the four techniques were highly related. Implications of this finding are discussed.
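
The correlation analysis at the heart of this design is simple to sketch. The Python fragment below is illustrative only — the ratings are synthetic placeholders, not data from the study — and shows how per-item scores from the four techniques could be compared with a Pearson correlation matrix:

```python
# Sketch of a four-technique correlation analysis; all data here are
# hypothetical placeholders, not the study's ratings.
import numpy as np

rng = np.random.default_rng(0)

n_items = 96                                      # e.g., listeners x speakers x sentences
base = rng.uniform(0.2, 1.0, n_items)             # latent "true" intelligibility
techniques = {
    "DME (modulus)":      base + rng.normal(0, 0.05, n_items),
    "DME (free modulus)": base + rng.normal(0, 0.05, n_items),
    "interval scaling":   base + rng.normal(0, 0.05, n_items),
    "transcription":      base + rng.normal(0, 0.05, n_items),
}

scores = np.vstack(list(techniques.values()))     # shape (4, n_items)
r = np.corrcoef(scores)                           # 4x4 Pearson correlation matrix
for name, row in zip(techniques, r):
    print(name, np.round(row, 2))
```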

5aSC4. Effects of speech-rate and pause duration on sentence intelligibility in younger and older normal-hearing listeners. Akihiro Tanaka, Shuichi Sakamoto, and Yô-iti Suzuki (R.I.E.C., Tohoku Univ., Katahira 2-1-1, Aoba-ku, Sendai 980-8577, Japan)

Speech-rate conversion techniques aid speech comprehension by allowing more time for perceptual and cognitive processes. However, if only the speech-rate of a telecast is converted, auditory and visual information become asynchronous. One possible method to resolve the problem is to reduce the pause durations between phrases; unfortunately, this can evoke a marked negative effect. For that reason, the present study examines the effects of speech-rate and pause duration on sentence intelligibility. We manipulated the lengths of phrases relative to the original length (0, 100, 200, 300, and 400 ms) and the pause durations between phrases in a sentence (0, 100, 200, 300, and 400 ms). Listeners were asked to write down sentences they discerned from the noise. The intelligibility score increased in younger and older listeners when the speech signal was expanded. Regarding the pause duration, intelligibility was best when the pause duration was 200 ms in younger listeners; in older listeners, the intelligibility score was highest when the pause durations were 200 ms and 400 ms. These results provide evidence that might benefit speech-rate conversion through better use of pause duration.
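
As a rough illustration of the pause manipulation described above, the sketch below (hypothetical code, assuming phrase-segmented audio at a 16-kHz sampling rate; not the authors' stimulus-preparation tools) joins phrase waveforms with a fixed silent gap:

```python
# Sketch: rebuild a sentence from phrase waveforms with a chosen
# inter-phrase pause. Names and parameters are illustrative.
import numpy as np

def insert_pauses(phrases, pause_ms, fs=16000):
    """Join phrase waveforms with pause_ms of silence between them."""
    gap = np.zeros(int(fs * pause_ms / 1000.0), dtype=np.float32)
    out = []
    for i, p in enumerate(phrases):
        out.append(p.astype(np.float32))
        if i < len(phrases) - 1:          # no trailing pause after the last phrase
            out.append(gap)
    return np.concatenate(out)

# Example: three dummy "phrases" joined with the 200-ms gap that was
# best for the younger listeners in the study.
phrases = [np.random.randn(8000) for _ in range(3)]
stimulus = insert_pauses(phrases, pause_ms=200)
```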

5aSC5. Simulation of temporal aspects of auditory aging. Ewen MacDonald (Inst. of Biomaterials and Biomed. Eng., Rm. 407 Rosebrugh, Univ. of Toronto, Toronto, ON, Canada M5S 3G9, macdone@ecf.utoronto.ca), Kathy Pichora-Fuller, and Bruce Schneider (Univ. of Toronto at Mississauga (UTM), Mississauga, ON, Canada L5L 1C6)

A jittering technique to disrupt the periodicity of the signal was used to simulate the effect of the loss of temporal synchrony coding believed to characterize auditory aging. In one experiment jittering was used to distort the frequency components below 1.2 kHz, and in a second experiment the components above 1.2 kHz were distorted. To control for the spectral distortion introduced by jittering, comparison conditions were created using a smearing technique [Baer and Moore, 1993]. In both experiments, 16 normal-hearing young adult subjects were presented with SPIN sentences in three conditions (intact, jittered, and smeared) at 0 and 8 dB SNR. When the low frequencies were distorted, speech intelligibility in the jittered conditions was significantly worse than in the intact and smeared conditions, but the smeared and intact conditions were equivalent. When the high frequencies were distorted, speech intelligibility was reduced similarly by jittering and smearing. On low-context jittered sentences, results for young adults mimicked results found previously for older listeners with good audiograms [Pichora-Fuller et al., 1995]. It is argued that the jittering technique could be used to simulate the loss of neural synchrony associated with age-related changes in temporal auditory processing.
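
A minimal sketch of one way such band-limited jittering might be implemented is given below. This is an interpretation of the idea, not the authors' code; the filter order, jitter depth, and smoothing window are illustrative choices:

```python
# Sketch: disrupt periodicity below 1.2 kHz by resampling the low band
# on a randomly perturbed time axis, then recombine with the intact
# high band. Parameters are illustrative, not from the study.
import numpy as np
from scipy.signal import butter, filtfilt

def jitter_low_band(x, fs, fc=1200.0, max_jitter_ms=0.5, seed=0):
    b_lo, a_lo = butter(4, fc / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, fc / (fs / 2), btype="high")
    low, high = filtfilt(b_lo, a_lo, x), filtfilt(b_hi, a_hi, x)

    rng = np.random.default_rng(seed)
    n = len(x)
    # Smooth the random time offsets (in samples) so the warp is gradual.
    jitter = rng.normal(0, max_jitter_ms * fs / 1000.0, n)
    jitter = np.convolve(jitter, np.ones(32) / 32.0, mode="same")

    warped_t = np.clip(np.arange(n) + jitter, 0, n - 1)
    low_jittered = np.interp(warped_t, np.arange(n), low)
    return low_jittered + high        # high band left intact
```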

5aSC6. Comparison of hearing loss compensation algorithms using speech intelligibility measures. Meena Ramani (Dept. of Elec. and Comput. Eng., Univ. of Florida, P.O. Box 116130, Bldg. 33, Ctr. Dr., Rm. NEB 444, Gainesville, FL 32611), John G. Harris (Univ. of Florida, Gainesville, FL 32611), Alice E. Holmes (Univ. of Florida, Gainesville, FL 32611), Mark Skowronski (Univ. of Florida, Gainesville, FL 32611), and Sharon E. Powell (Univ. of Florida, Gainesville, FL 32611)

Sensorineural hearing loss includes a loss of high-frequency sensitivity, which results in decreased speech intelligibility. The loss cannot be compensated for by inverting the audiogram because of the non-linear effects of sensorineural hearing loss (frequency smearing, decreased dynamic range, decreased time-frequency resolution). Several non-linear compensation schemes exist (Half-gain, POGO, NAL-R, FIG6, DSL, and LGOB), and this paper provides a comparison of those using the objective Perceptual Evaluation of Speech Quality (PESQ) score and the subjective Hearing In Noise Test (HINT). The listening tests were run on 15 unaided hearing-impaired listeners as well as 15 normal-hearing listeners using a simulated hearing loss algorithm. These results show a marked improvement in intelligibility for the compensated speech over the uncompensated speech for both normal-hearing and hearing-impaired adults.
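
Of the schemes listed, the half-gain rule is the simplest to illustrate: prescribe insertion gain equal to half the audiometric loss at each frequency. The sketch below uses hypothetical audiogram values and is not the paper's implementation; it realizes the prescribed gain contour as an FIR filter:

```python
# Sketch of half-gain-rule compensation; audiogram and filter length
# are illustrative assumptions, not values from the study.
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16000
audiogram_hz = [250, 500, 1000, 2000, 4000, 6000]   # standard test frequencies
loss_db      = [10,  15,  25,   45,   60,   70]      # hypothetical hearing loss (dB HL)

# Half-gain rule: prescribed insertion gain = 0.5 * hearing loss.
gain_db  = [0.5 * hl for hl in loss_db]
freqs    = [0] + audiogram_hz + [fs / 2]
gains    = [gain_db[0]] + gain_db + [gain_db[-1]]
gain_lin = 10.0 ** (np.asarray(gains) / 20.0)

# FIR filter matching the prescribed gain contour.
taps = firwin2(257, freqs, gain_lin, fs=fs)

speech = np.random.randn(fs)             # stand-in for one second of speech
compensated = lfilter(taps, 1.0, speech)
```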

5aSC7. A comparative study of perceived, predicted, and measured speech intelligibility. Michael E. Hermes, Melinda J. Carney, and Dominique J. Cheenne (Dept. of Audio Arts & Acoust., Columbia College Chicago, Chicago, IL 60605)

Intelligibility metrics were obtained using a variety of methods in a gymnasium that serves as a place of worship. A word-list trial, a computer model, and a computer-based %Alcons test provided the data. The results were compared in order to gauge their relative accuracy. The data from the %Alcons testing were found to be unreliable, but a direct relationship was established between the mean word-list test scores and the results gathered from the computer model. This relationship allowed for a translation of the scores to %Alcons.
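
For reference, %Alcons (articulation loss of consonants) is classically estimated with the Peutz formula. The sketch below gives the textbook far-field relation, not the measurement tool used in the study, and all inputs are illustrative:

```python
# Sketch: Peutz estimate of %Alcons in the reverberant field.
def percent_alcons(distance_m, rt60_s, volume_m3, q_directivity):
    """Articulation loss of consonants, Peutz far-field formula."""
    return 200.0 * distance_m**2 * rt60_s**2 / (volume_m3 * q_directivity)

# Example with gymnasium-scale values: ~7 %Alcons, i.e., fair-to-good
# intelligibility by common rules of thumb.
print(percent_alcons(distance_m=15.0, rt60_s=2.5,
                     volume_m3=8000.0, q_directivity=5.0))
```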

5aSC8. A statistical model for prediction of functional hearing abilities in real-world noise environments. Sigfrid Soli (House Ear Inst., 2100 W. 3rd St., Los Angeles, CA 90057), Chantal Laroche, Christian Giguère, and Véronique Vaillancourt (Univ. of Ottawa, Ottawa, ON, Canada)

Many tasks require functional hearing abilities, such as speech communication, sound localization, and sound detection, and are performed in challenging noisy environments. Individuals who must perform these tasks and whose functional hearing abilities are impaired by hearing loss may constitute safety risks to themselves and others. We have developed and validated, in two languages (American English and Canadian French), statistical techniques based on Plomp's [1986] speech reception threshold model of speech communication handicap. These techniques predict functional hearing ability using the statistical characteristics of the real-world noise environments where the tasks are performed, together with the communication task parameters. The techniques will be used by the Department of Fisheries and Oceans Canada to screen individuals who are required to perform hearing-critical public safety tasks. This presentation will summarize the three years of field and laboratory work culminating in the implementation of the model. Emphases will be placed on the methods
