ライフサポート (Life Support)
Online ISSN : 1884-5827
Print ISSN : 1341-9455
ISSN-L : 1341-9455
Development of a Lip-Reading Communication Support System Based on Lip Displacement Measurement
(口唇変位計測による読唇コミュニケーション支援システムの開発)
張 斌, 舟久保 昭夫, 福井 康裕

1998, Volume 10, Issue 3, pp. 106-110

Abstract

It is well known that acoustic language signals depend on the movement of the visible facial articulators to such an extent that a great amount of phonetic information can be extracted from lip-reading alone. An automated speech recognition system for voice-impaired people, based on data from lip movement, is both desirable and feasible for sentence recognition. Unlike the usual lip-reading experiment, in this paper we investigated a minimal automatic lip-reading system that can be used to help voice-impaired people under certain circumstances. In our system, a video camera is set up in front of the subject to continuously monitor the subject's facial images and pass them to image-processing equipment. We decode discourse based only on the movements of three points in the mouth area, which are traced through image processing. The data for a whole sentence are matched against predefined sentences through a back-propagation neural network. The system attained an 86% correct recognition rate in a 20-sentence recognition task.
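The classification stage described in the abstract (whole-sentence lip-displacement trajectories matched against a fixed set of predefined sentences by a back-propagation neural network) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame count, feature layout, network size, learning rate, and the synthetic trajectory data are all assumptions introduced here; the paper's actual task used 20 predefined sentences and reached 86% accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed feature layout (illustrative, not from the paper):
# x/y displacement of three tracked lip points over a fixed number of frames.
N_POINTS = 3       # three points in the mouth area, per the abstract
N_FRAMES = 10      # assumed frames per sentence
N_CLASSES = 4      # small stand-in for the paper's 20 predefined sentences
D = N_POINTS * 2 * N_FRAMES

def one_hot(y, k):
    out = np.zeros((y.size, k))
    out[np.arange(y.size), y] = 1.0
    return out

# Synthetic "lip trajectories": one noisy prototype per sentence class.
prototypes = rng.normal(size=(N_CLASSES, D))
y = np.repeat(np.arange(N_CLASSES), 20)
X = prototypes[y] + 0.1 * rng.normal(size=(y.size, D))
T = one_hot(y, N_CLASSES)

# One-hidden-layer perceptron trained by plain back-propagation.
H = 16
W1 = 0.1 * rng.normal(size=(D, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, N_CLASSES)); b2 = np.zeros(N_CLASSES)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):
    h = np.tanh(X @ W1 + b1)           # forward pass
    p = softmax(h @ W2 + b2)
    g2 = (p - T) / X.shape[0]          # backward pass: cross-entropy gradient
    gW2 = h.T @ g2; gb2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - h**2)      # back-propagate through tanh layer
    gW1 = X.T @ g1; gb1 = g1.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2     # gradient-descent weight update
    W1 -= lr * gW1; b1 -= lr * gb1

# Classify each trajectory as one of the predefined sentences.
pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The key design point mirrored from the abstract is that the network sees one fixed-length vector per sentence (the whole trajectory) rather than per-frame phoneme labels, so recognition reduces to matching against the closed set of predefined sentences.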

© ライフサポート学会