The Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH)
Online ISSN : 2424-3124
Session ID: 2P1-G03
Conference information
2P1-G03 Segmenting Sound Signals and Articulatory Movement using Recurrent Neural Network toward Phoneme Acquisition
Hisashi Kanda, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno
Author information
Conference proceedings / Abstracts: Free access

Details
Abstract
This paper proposes a computational model for phoneme acquisition by infants. Infants perceive speech not as discrete phoneme sequences but as continuous acoustic signals. One of the critical problems in phoneme acquisition is how to segment this continuous speech. The key idea for solving this problem is that articulatory mechanisms such as the vocal tract help human beings perceive sound units corresponding to phonemes. To segment acoustic signals together with articulatory movement, our system was implemented using a physical vocal tract model, called the Maeda model, and a segmentation method based on a Recurrent Neural Network with Parametric Bias (RNNPB). This method determines segmentation boundaries in a sequence from the prediction error of the RNNPB model, and the PB values obtained by the method can be encoded as a kind of phoneme representation. Experimental results demonstrated that our system could self-organize the same phonemes across different continuous sounds. This suggests that our model reflects the process of phoneme acquisition.
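The segmentation mechanism described above, placing boundaries where the model's one-step prediction error rises, can be sketched schematically. The code below is not the authors' RNNPB implementation: the network is a plain Elman-style recurrent net with untrained random weights, the threshold rule (mean plus k standard deviations) is a hypothetical stand-in for the paper's criterion, and the Maeda vocal tract model and PB self-organization are omitted. It only illustrates how prediction error over a continuous feature sequence can yield candidate boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
D, H = 4, 8                                   # feature and hidden sizes
W_in = rng.normal(scale=0.3, size=(H, D))     # input-to-hidden weights
W_rec = rng.normal(scale=0.3, size=(H, H))    # recurrent weights
W_out = rng.normal(scale=0.3, size=(D, H))    # hidden-to-output weights


def prediction_errors(seq):
    """Squared one-step-ahead prediction error at each time step."""
    h = np.zeros(H)
    errors = []
    for t in range(len(seq) - 1):
        h = np.tanh(W_in @ seq[t] + W_rec @ h)  # update hidden state
        pred = W_out @ h                        # predict next frame
        errors.append(float(np.sum((pred - seq[t + 1]) ** 2)))
    return np.array(errors)


def segment(seq, k=1.0):
    """Mark a boundary wherever error exceeds mean + k*std (hypothetical rule)."""
    err = prediction_errors(seq)
    thresh = err.mean() + k * err.std()
    return [t + 1 for t, e in enumerate(err) if e > thresh]


# Toy "continuous signal": two constant regimes with an abrupt change;
# prediction error tends to spike near such transitions.
seq = np.concatenate([np.tile([1.0, 0.0, 0.0, 0.0], (20, 1)),
                      np.tile([0.0, 1.0, 0.0, 0.0], (20, 1))])
boundaries = segment(seq)
```

In the actual model, the RNNPB is trained on articulatory and acoustic data, and the PB values associated with each segment, rather than a fixed threshold, provide the phoneme-like encoding.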
© 2008 The Japan Society of Mechanical Engineers