Journal of the Robotics Society of Japan
Online ISSN : 1884-7145
Print ISSN : 0289-1824
ISSN-L : 0289-1824
Recognition of Simultaneous Speech Based on Automatic Missing Feature Mask Generation Integrated with Sound Source Separation
Shun'ichi Yamamoto, Jean-Marc Valin, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

2007, Vol. 25, No. 1, pp. 92-102

Abstract
Our goal is to realize a humanoid robot capable of recognizing simultaneous speech. A humanoid robot in real-world environments usually hears a mixture of sounds, so three capabilities are essential for robot audition: sound source localization, sound source separation, and recognition of the separated sounds. In particular, the interface between sound source separation and speech recognition is important. In this paper, we design such an interface by applying Missing Feature Theory (MFT). In this method, spectral sub-bands distorted by sound source separation are detected in the input speech as missing features. The detected missing features are masked during recognition so that they do not degrade performance. This makes the method more flexible when noise changes dynamically and drastically. The most important issue is how to detect the distorted spectral sub-bands. To solve this, we use a speech feature appropriate for MFT-based ASR and develop automatic missing feature mask generation. As the speech feature, we use the Mel-Scale Log Spectral (MSLS) feature instead of the Mel-Frequency Cepstrum Coefficient (MFCC), which is commonly used for ASR. We present a method of generating the missing feature mask automatically using information from sound source separation. To evaluate our method, we implemented it on a humanoid robot, SIG2, and performed experiments on the recognition of three simultaneous isolated words. As a result, our method outperformed conventional ASR with the MSLS feature.
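The idea of MFT-based masking described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the simple energy-ratio criterion for detecting distorted sub-bands, and the diagonal-Gaussian scoring are assumptions chosen only to show the mechanism: sub-bands where the separated signal does not dominate the estimated leakage are marked as missing, and those dimensions are excluded (marginalized) from the acoustic likelihood.

```python
import numpy as np

def generate_missing_feature_mask(separated_band_energy, leakage_band_energy,
                                  threshold=1.0):
    """Hypothetical mask generator: a sub-band is reliable (1.0) when the
    separated energy dominates the estimated leakage from other sources,
    and missing (0.0) otherwise."""
    return (separated_band_energy > threshold * leakage_band_energy).astype(float)

def masked_log_likelihood(feature, mean, var, mask):
    """Marginalization-style scoring for one frame under a diagonal
    Gaussian: only dimensions marked reliable contribute."""
    per_dim = -0.5 * (np.log(2.0 * np.pi * var) + (feature - mean) ** 2 / var)
    return float(np.sum(mask * per_dim))

# Toy example: three spectral sub-bands, the middle one distorted by separation.
separated = np.array([2.0, 0.5, 3.0])   # energy after separation
leakage   = np.array([1.0, 1.0, 1.0])   # estimated leakage energy
mask = generate_missing_feature_mask(separated, leakage)
# mask is [1.0, 0.0, 1.0]: the middle sub-band is treated as missing.

feature = np.array([0.1, 5.0, -0.2])    # observed feature (middle dim corrupted)
mean    = np.array([0.0, 0.0, 0.0])
var     = np.array([1.0, 1.0, 1.0])
score = masked_log_likelihood(feature, mean, var, mask)
```

Because the corrupted middle dimension is masked out, the large mismatch there has no effect on the score, which is the flexibility the abstract attributes to MFT-based recognition under dynamically changing noise.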
© The Robotics Society of Japan