
The 22nd Robot Audition Special Feature - Okuno Laboratory - Kyoto University



Figure 7: Tracking of a Moving Sound Source with the Heading. a) Ultrasonic Three Dimensional Tag System; b) Microphone Array System. Both panels plot Position Y (m) against Position X (m).
