IEEJ Transactions on Electronics, Information and Systems (電気学会論文誌C)
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
An Input Interface Using Speech and Direct Touch Operation
中川 聖一, 張 建新

1994, Vol. 114, No. 10, pp. 1009-1017

Abstract

There has been increasing interest in the development of human-interface systems that incorporate multimodal input methods.
In this study, an Automaton-Controlled One Pass Viterbi program was developed to recognize speaker-independent continuous speech. The baseline recognition mechanism is based on the One Pass Viterbi algorithm using the concatenation of syllable HMMs (Hidden Markov Models). In this program, once dictionary and automaton information files based on the definition of a finite-state automaton are prepared, any sentence accepted by the automaton can be recognized.
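The automaton-constrained decoding described above can be illustrated with a toy sketch. This is an assumed, heavily simplified model, not the authors' implementation: each automaton arc is labelled with a "syllable" that emits one discrete acoustic symbol per frame (standing in for a full syllable HMM), and a Viterbi-style beam keeps the best-scoring hypothesis per automaton state, so only sentences accepted by the automaton can be output.

```python
import math

# Hypothetical toy grammar: automaton state -> list of (syllable, next_state).
# Arc labels stand in for concatenated syllable HMMs.
ARCS = {
    0: [("mo", 1)],
    1: [("te", 2), ("do", 2)],
    2: [],
}
START, FINAL = 0, 2

# Simplified emission log-probabilities: syllable -> {acoustic symbol: logp}.
EMIT = {
    "mo": {"a": math.log(0.7), "b": math.log(0.3)},
    "te": {"a": math.log(0.2), "b": math.log(0.8)},
    "do": {"a": math.log(0.6), "b": math.log(0.4)},
}

def decode(obs):
    """Return the best syllable sequence accepted by the automaton."""
    # beam: automaton state -> (log score, syllable history so far)
    beam = {START: (0.0, [])}
    for sym in obs:
        nxt = {}
        for state, (score, hist) in beam.items():
            for syl, to in ARCS[state]:
                s = score + EMIT[syl].get(sym, math.log(1e-9))
                # Viterbi recombination: keep only the best path per state.
                if to not in nxt or s > nxt[to][0]:
                    nxt[to] = (s, hist + [syl])
        beam = nxt
    if FINAL not in beam:
        return None  # observation sequence not accepted by the grammar
    return beam[FINAL][1]

print(decode(["a", "b"]))  # -> ['mo', 'te']
print(decode(["a", "a"]))  # -> ['mo', 'do']
```

Because the beam is indexed by automaton state, ill-formed syllable sequences are pruned during the single left-to-right pass, which is the essence of grammar-driven One Pass Viterbi decoding.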
As an application of the speech recognition, input methods by speech and by touch-screen were studied. Since these two methods are complementary to each other, they can be used together to create a more user-friendly human interface. To illustrate this, a multimodal robot control simulation system was built on the Sun Sparc10 (using the attached AD converter with an 11.025 kHz sampling rate). To evaluate this system, a robot control simulation system using only the speech input method and one using only the touch-screen input method were also developed. After evaluating these systems, it was found that the multimodal system incorporating both speech input and touch-screen input was able to compensate for the deficiencies of touch-screen-only input and to improve speech-recognition speed and reduce the error rate. Also, by incorporating both speech input and touch-screen input, operational errors became infrequent and more kinds of useful operations could be performed than with only speech or only touch-screen input.
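One way the two modalities complement each other is that speech supplies the action while touch supplies locations. The sketch below is an assumed design, not the paper's system: deictic words in a recognized utterance (the hypothetical tokens "it" and "there") are resolved against queued touch-screen coordinates.

```python
from collections import deque

class MultimodalInput:
    """Combine a recognized utterance with recent touch-screen events."""

    def __init__(self):
        self.touches = deque()  # queued (x, y) touch points, oldest first

    def touch(self, x, y):
        self.touches.append((x, y))

    def command(self, words):
        """Replace deictic words with the corresponding touch coordinates."""
        resolved = []
        for w in words:
            if w in ("it", "there") and self.touches:
                resolved.append(self.touches.popleft())
            else:
                resolved.append(w)
        return resolved

mm = MultimodalInput()
mm.touch(120, 45)    # user taps the object to move
mm.touch(300, 210)   # user taps the target position
print(mm.command(["move", "it", "there"]))
# -> ['move', (120, 45), (300, 210)]
```

Speaking "move it there" while tapping twice is both faster than naming coordinates aloud and less error-prone than a long touch-only menu sequence, which matches the complementarity the abstract reports.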

© The Institute of Electrical Engineers of Japan (電気学会)