FRIDAY MORNING, 20 MAY 2005 REGENCY E, 8:30 A.M. TO 12:00 ...

5aSC25. Effect of two-band dichotic listening for hearing-impaired listeners. Shuichi Sakamoto, Atsunobu Murase, Yôiti Suzuki (Res. Inst. of Elect. Comm./Grad. School of Information Sci., Tohoku Univ., 2-1-1 Katahira, Aoba-ku, Sendai, Miyagi, Japan, saka@ais.riec.tohoku.ac.jp), Tetsuaki Kawase, and Toshimitsu Kobayashi (Tohoku Univ., Aoba-ku, Sendai, Miyagi, Japan)

Increased upward spread of masking is typically observed in sensorineural hearing-impaired listeners. To address this problem, dichotic listening, in which the input speech spectrum is split into two complementary parts presented to opposite ears, appears effective in reducing masking between contiguous frequency bands. This study examines the effects of simple two-band dichotic listening with a cut-off frequency around and between the typical first and second formant frequencies of the preceding vowel. Speech intelligibility was measured in both quiet and noisy environments (S/N = 4 and 0 dB). Three types of vowel-consonant-vowel nonsense monosyllables, whose preceding vowels were /a/, /i/, and /u/, were used as speech stimuli. Results showed that this dichotic processing was effective, especially at relatively high S/N. Moreover, the best dividing frequency depended on the preceding vowel: when /a/-consonant-vowel stimuli were used, the best dividing frequency was 1.0 kHz, around F1 of the Japanese vowel /a/, whereas it was 0.8 kHz, between F1 and F2 of the Japanese vowel /u/, when /u/-consonant-vowel stimuli were used.

5aSC26. Signal-to-noise ratio loss and consonant confusions. Yangsoo Yoon and Jont B. Allen (Univ. of Illinois, Speech and Hearing, 901 S. Sixth, Champaign, IL 61820, yyoon5@uiuc.edu)

Previous studies of SNR loss (also called speech loss) showed that (1) SNR loss cannot be predicted from audiometric measures, (2) 40% of hearing-aid wearers have an SNR loss of 5 dB or greater, and (3) SNR loss significantly influences speech intelligibility. These studies showed SNR loss to be important in speech recognition, but they do little or nothing to illuminate the nature of the consonant confusions that result from SNR loss. Thus, the goal of the current study was to investigate the effect of SNR loss on the recognition of 16 consonants by hearing-impaired listeners as a function of SNR. Confusion-matrix data were collected and analyzed, and Fletcher's AI was calculated from the SNR. These two measures were used (1) to determine how SNR loss was related to event loss, (2) to test whether the clustering of syllables in terms of consonant confusions was consistent with SNR loss, and (3) to compare PI functions obtained from subjects with the AI model. The results show that the degree of consonant confusion varies, but the consonants confused with the target sound above chance level are similar, as a function of SNR loss and SNR. This suggests that SNR loss limits recognition of specific consonants, even in noise.

5aSC27. Driving performance and auditory distractions. Elzbieta B. Slawinski, Jane F. MacNeil, Mona Motamedi, Benjamin R. Zendel (Psych. Dept., Univ. of Calgary, 2500 Univ. Dr., Calgary, AB, Canada T2N 1N4), Kirsten Dugdale, and Michelle Johnson (Univ. of Calgary, Calgary, AB, Canada T2N 1N4)

Driving performance depends on the ability to divide attention among different tasks. Although driving is primarily associated with visual stimulation, driving performance also depends on attention to auditory stimulation and/or auditory distraction. Research shows that listening to the radio is a principal auditory distracter while driving (Brodsky, 2002). In the laboratory, several experiments were conducted on auditory distraction (e.g., music, stories) and signal processing by young and older drivers. Results show that older subjects engaged in listening to a stream of information, independent of their hearing status, require a higher intensity of auditory stimulation than younger drivers. It was shown that cognition plays a role while listening to auditory stimuli. Moreover, it was demonstrated that driving performance was influenced by the type of music played. A portion of these experiments and their results were presented at the Annual Meetings of the CAA in 2002 and 2003, and published in the Journal of Psychomusicology 18, 203–209. Complete results of the experiments will be discussed.

5aSC28. Speech intelligibility index calculations in light aircraft cabin during flight. Tino Bucak and Ernest Bazijanac (Dept. of Aeronautics, Faculty of Transport and Traffic Eng., Univ. of Zagreb, Croatia)

High levels of cabin noise in small general-aviation aircraft significantly degrade the quality of speech communication and potentially endanger the safety of flight. Several ground and in-flight cabin-noise measurements were made on a new-generation Cessna 172R during various phases of flight. The results are analyzed and used for Speech Intelligibility Index (SII) calculations, in order to quantify the influence of cabin noise on speech communication between crew members.

5aSC29. A detailed study on the effects of noise on speech reception. Tammo Houtgast and Finn Dubbelboer (VU Univ. Medical Ctr., Amsterdam, The Netherlands)

The effect of adding continuous noise to a speech signal was studied by comparing, for a series of quarter-octave bands, the band output for the original speech and for the speech-plus-noise. Three separate effects were identified. (a) Average envelope-modulation reduction: the original intensity envelope is, on average, raised by the mean noise intensity, resulting in a reduction of the original modulation index. (b) Random instantaneous envelope fluctuations: on an instantaneous basis, the speech-plus-noise envelope shows random variations, caused by the stochastic nature of the noise and by the instantaneous changes in the phase relation between the speech and the noise. (c) Perturbations of the carrier phase: in the band-output carrier signal, the addition of the noise causes random phase changes. By applying signal-processing techniques, we were able to either include or exclude each of these three effects separately. The results of intelligibility measurements indicated the following order of importance of the three effects: (1) the average envelope-modulation reduction, (2) the perturbation of the carrier phase, and (3) the random envelope fluctuations. The results will be discussed in the light of modeling speech reception in noise and of enhancing noise-suppression schemes.

5aSC30. Speech rate characteristics in dysarthria. Kris Tjaden, Geoff Greenman, Taslim Juma, and Roselinda Pruitt (Dept. of Communicative Disord., Univ. at Buffalo, 122 Cary Hall, 3435 Main St., Buffalo, NY 14214, tjaden@acsu.buffalo.edu)

Speech rate disturbances are pervasive in dysarthria, with some reports suggesting that up to 80% of speakers with dysarthria exhibit speech rates that differ from those of neurologically normal talkers. The contribution of articulation time and pause time to the overall impairment in speech rate is not well understood. Studies investigating speech rate characteristics in dysarthria also tend to focus on reading materials, yet there is reason to suspect that the higher cognitive load of conversational speech may affect speech rate differently for individuals with impaired speech motor control than for neurologically normal talkers. The current study will report speech rate characteristics for both a reading passage and conversational speech produced by individuals with dysarthria secondary to multiple sclerosis (MS), individuals with dysarthria secondary to Parkinson disease (PD), and healthy controls. The manner in which speech rate, articulation rate, and pause characteristics differ for speakers with dysarthria and healthy controls will be examined. The contribution of articulation time and pause time to overall speech rate will also be studied and compared for the reading passage and conversational speech. [Work supported by NIDCD R01DC04689.]

2608 J. Acoust. Soc. Am., Vol. 117, No. 4, Pt. 2, April 2005 149th Meeting: Acoustical Society of America 2608
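The AI and SII measures invoked in 5aSC26 and 5aSC28 share one core idea: clip each frequency band's signal-to-noise ratio to a fixed range, map it to an audibility between 0 and 1, and sum the audibilities weighted by band importance. A minimal sketch of that idea follows; the band weights and the speech and cabin-noise levels are invented for illustration, not the ANSI S3.5 tables or the measurements reported in these abstracts.

```python
# Simplified, SII-style intelligibility index: per-band SNR is clipped to
# [-15, +15] dB, mapped linearly to an audibility of 0..1, and weighted by
# band importance.  Band weights and levels below are illustrative only.

def sii(speech_levels, noise_levels, weights):
    """Return a 0..1 intelligibility index from per-band dB levels."""
    assert len(speech_levels) == len(noise_levels) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # importance weights sum to 1
    index = 0.0
    for s, n, w in zip(speech_levels, noise_levels, weights):
        snr = max(-15.0, min(15.0, s - n))   # clip band SNR to +/-15 dB
        audibility = (snr + 15.0) / 30.0     # map to 0..1
        index += w * audibility
    return index

# Hypothetical octave bands (250 Hz .. 4 kHz); levels in dB SPL
weights = [0.1, 0.2, 0.3, 0.25, 0.15]
speech = [65, 68, 62, 55, 48]
cruise_noise = [78, 72, 60, 50, 45]   # loud cabin during cruise
taxi_noise = [60, 55, 48, 40, 38]     # quieter cabin during taxi

print(round(sii(speech, taxi_noise, weights), 3))
print(round(sii(speech, cruise_noise, weights), 3))
```

With these made-up numbers, the quieter taxi condition yields the higher index, which is the kind of phase-of-flight contrast the 5aSC28 measurements quantify.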

FRIDAY MORNING, 20 MAY 2005 GEORGIA A, 8:30 TO 11:35 A.M.

Session 5aSP

Signal Processing in Acoustics: Smart Acoustic Sensing for Land-Based Surveillance

Brian Ferguson, Chair
Defence Science and Technology Organisation, Maritime Systems Div., Pyrmont 2009, Australia

Chair’s Introduction—8:30

Invited Papers

8:35

5aSP1. Acoustic methods for tactical surveillance. Brian G. Ferguson and Kam W. Lo (Defence Sci. and Technol. Organisation, P.O. Box 44, Pyrmont, NSW 2009, Australia)

Smart acoustic sensor systems can be deployed for the automatic detection, localization, classification, and tracking of military activities, which are inherently noisy. Acoustic sensors are appealing because they are passive, affordable, robust, and compact. Also, the propagation of sound energy is not limited by obstacles that block or obscure the clear line of sight required for the effective operation of electromagnetic systems. Methods, with examples, for extracting tactical information from acoustic signals emitted by moving sources (air and ground vehicles) are provided for both single-sensor and multiple-sensor configurations. The methods are based on processing either the narrowband or the broadband spectral components of the sources’ acoustic signatures. Weapon firings generate acoustic impulses, and supersonic projectiles generate shock waves, enabling source localization and classification by processing the signals received by spatially distributed sensors. The methods developed for land-based acoustic surveillance using microphone data are also applied to hydrophone data for passive acoustic surveillance of the underwater environment.

9:05

5aSP2. Autonomous acoustic sensing on mobile ground and aerial platforms. Tien Pham and Nassy Srour (US Army Res. Lab., 2800 Powder Mill Rd., Adelphi, MD 20783-1197)

Acoustic sensor systems on the ground and/or in the air can be used effectively for autonomous and remote intelligence, surveillance, and reconnaissance (ISR) applications. Acoustic sensors can be used as primary sensors and/or as secondary sensors that cue other higher-resolution sensors for the detection, tracking, and classification of continuous and transient battlefield acoustic events such as ground vehicles, airborne aircraft, personnel, indirect fire, and direct fire. Current collaborative research activities at ARL in acoustic sensing from mobile ground platforms such as HMMWVs and small robotic vehicles [P. Martin and S. Young, Proc. of SPIE Defense & Security Symposium, 2004] and from aerial platforms such as UAVs and balloons [Reiff, Pham et al., Proc. of the 24th Army Science Conference, 2004] demonstrate practical performance enhancements over fixed ground-based platforms for a number of ISR applications. For both mobile ground and aerial platforms, self-generated noise (flow noise and platform noise) is problematic, but it can be suppressed with specialized windscreens, sensor placement, and noise-cancellation technology. Typical acoustic detection and processing results for mobile platforms are compared and contrasted with those for fixed ground-based platforms.

9:35

5aSP3. Ferret and its applications. Jacques Bedard (Defence R&D Canada–Valcartier, 2459 Pie XI North, Val-Belair, QC, Canada G3J 1X5)

Ferret is an acoustic system that detects, recognizes, and localizes the source and direction of small-arms fire. The system comprises a small array of microphones and pressure sensors connected to a standard PC-104 computer that analyzes, displays, reports, and logs the parameters of a recognized shot. The system operates by detecting and recognizing the ballistic shock wave created by the supersonic bullet, combined with the muzzle blast wave propagating from the weapon. The Canadian Land Force Test and Evaluation Unit evaluated a vehicle-mounted version of the system and recommended its deployment during peacekeeping missions. The system is the result of a collaborative effort between Defence R&D Canada and MacDonald Dettwiler and Associates. This presentation describes the hardware and software components of the system along with its current and future applications.
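The shock-then-blast signature that Ferret exploits also carries a range cue: the ballistic shock travels with the supersonic bullet while the muzzle blast travels at the speed of sound, so the delay between the two arrivals grows with distance to the shooter. A toy sketch under a deliberately simplified geometry (the bullet is assumed to fly straight toward the sensor at a constant, assumed speed; real systems such as Ferret solve the full Mach-cone geometry across a sensor array and account for bullet deceleration):

```python
# Toy range estimate from the delay between the ballistic shock wave and
# the muzzle blast.  Assumed constants; simplified straight-line geometry:
# the shock arrives after roughly R/v and the blast after R/c, so
# dt = R*(1/c - 1/v), which inverts to R = dt * c*v / (v - c).

SPEED_OF_SOUND = 343.0    # m/s, assumed air temperature around 20 C
BULLET_SPEED = 800.0      # m/s, assumed constant supersonic speed

def shock_blast_delay(range_m, c=SPEED_OF_SOUND, v=BULLET_SPEED):
    """Delay (s) between shock-wave and muzzle-blast arrivals at range_m."""
    return range_m / c - range_m / v

def estimate_range(delay_s, c=SPEED_OF_SOUND, v=BULLET_SPEED):
    """Invert the delay model to recover the shooter range in meters."""
    return delay_s * c * v / (v - c)
```

Under this model the measured delay gives range directly; direction comes from arrival-time differences across the microphone and pressure-sensor array.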

