IPSJ Transactions on Computer Vision and Applications
Online ISSN: 1882-6695
Audio-Visual Speech Recognition Using Convolutive Bottleneck Networks for a Person with Severe Hearing Loss
Yuki Takashima, Yasuhiro Kakihara, Ryo Aihara, Tetsuya Takiguchi, Yasuo Ariki, Nobuyuki Mitani, Kiyohiro Omori, Kaoru Nakazono

2015, Volume 7, pp. 64-68

Abstract

In this paper, we propose an audio-visual speech recognition system for a person with an articulation disorder resulting from severe hearing loss. For a person with this type of articulation disorder, the speech style differs so much from that of people without hearing loss that a speaker-independent model trained on unimpaired speech is of little use for recognizing it. We therefore investigate an audio-visual speech recognition system for a person with severe hearing loss in noisy environments, in which a robust feature extraction method based on a convolutive bottleneck network (CBN) is applied to the audio-visual data. We confirmed the effectiveness of this approach through word-recognition experiments in noisy environments, where the CBN-based feature extraction method outperformed conventional methods.
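To make the CBN idea concrete, the following is a minimal sketch of a convolutive bottleneck network feature extractor in PyTorch: a CNN trained to predict frame labels, with a narrow hidden "bottleneck" layer whose activations are taken as robust features for the recognizer. The layer sizes, input patch size (28x28), number of classes, and 30-dimensional bottleneck are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ConvolutiveBottleneckNetwork(nn.Module):
    """CNN classifier with a narrow bottleneck layer; after training,
    the bottleneck activations serve as features for the speech recognizer."""
    def __init__(self, n_classes: int = 40, bottleneck_dim: int = 30):
        super().__init__()
        # Convolution + pooling over a time-frequency patch (audio)
        # or a lip-region image patch (visual)
        self.conv = nn.Sequential(
            nn.Conv2d(1, 13, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(13, 13, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.flatten = nn.Flatten()
        # Fully connected layers with a narrow bottleneck in the middle
        self.fc_in = nn.Sequential(nn.Linear(13 * 4 * 4, 108), nn.ReLU())
        self.bottleneck = nn.Linear(108, bottleneck_dim)
        self.fc_out = nn.Sequential(nn.ReLU(), nn.Linear(bottleneck_dim, 108),
                                    nn.ReLU(), nn.Linear(108, n_classes))

    def forward(self, x):
        h = self.fc_in(self.flatten(self.conv(x)))
        z = self.bottleneck(h)           # bottleneck features
        return self.fc_out(z), z         # (class logits, features)

# Usage: a batch of 28x28 single-channel patches; after training,
# only `features` would be passed on to the recognizer back-end.
x = torch.randn(8, 1, 28, 28)
logits, features = ConvolutiveBottleneckNetwork()(x)
print(features.shape)  # torch.Size([8, 30])
```

In this setup the network is trained on the classification loss, and the audio and visual streams would each use their own CBN; the bottleneck outputs then replace or augment conventional features in the noisy-environment word-recognition task.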

© 2015 by the Information Processing Society of Japan