
Robot Audition Special Feature (Part 22) - Okuno Laboratory, Kyoto University


Figure 8: Signal flow of the real-time BSS system. The left- and right-channel inputs are transformed by FFT; the SIMO-ICA separation filter W(f) is updated over 3 s blocks and applied in real time; binary masks are then applied per frequency bin, and the separated signals are reconstructed by inverse FFT. [figure not reproduced]

Figure 9: ICA. [figure not reproduced]
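The per-frame processing shown in Figure 8 (per-bin linear unmixing with W(f) followed by binary masking) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the unmixing matrices W(f) are taken as already estimated (by SIMO-ICA in the paper), the function name `separate_frame` is illustrative, and the toy mixture is constructed so that a constant per-bin inverse separates it exactly.

```python
import numpy as np

def separate_frame(X, W, mask=True):
    """Separate one short-time spectrum frame.

    X : complex array, shape (2, F) -- left/right channel FFT bins.
    W : complex array, shape (F, 2, 2) -- per-bin unmixing matrices
        (assumed already estimated; SIMO-ICA provides them in the paper).
    Returns Y, shape (2, F): separated source spectra, optionally
    binary-masked so each bin keeps only its dominant output.
    """
    F = X.shape[1]
    # Per-bin linear unmixing: Y[:, f] = W[f] @ X[:, f]
    Y = np.einsum('fij,jf->if', W, X)
    if mask:
        # Binary masking: in each frequency bin, keep only the output
        # with the larger magnitude and zero the other.
        dom = np.argmax(np.abs(Y), axis=0)
        M = np.zeros(Y.shape)
        M[dom, np.arange(F)] = 1.0
        Y = Y * M
    return Y

# Toy demonstration: two sources with disjoint frequency support,
# mixed by a constant 2x2 matrix A; W(f) = inv(A) undoes the mixture,
# and the binary mask leaves the already-separated bins unchanged.
F = 8
rng = np.random.default_rng(0)
S = np.zeros((2, F), dtype=complex)
S[0, :F // 2] = rng.standard_normal(F // 2)   # source 1: low bins only
S[1, F // 2:] = rng.standard_normal(F // 2)   # source 2: high bins only
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S
W = np.broadcast_to(np.linalg.inv(A), (F, 2, 2))
Y = separate_frame(X, W)
print(np.allclose(Y, S))  # True
```

In a full system this would run per STFT frame, with W(f) re-estimated every 3 s block as the figure indicates, and the masked spectra passed through an inverse FFT with overlap-add to reconstruct the time-domain signals.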
