2009, Vol. 129, No. 10, pp. 1865-1873
Lip motion features are of practical use in identifying individuals, so it is important to develop a non-contact interface based on them. An interface that uses lip motion features must accept individual differences in commands, such as accents and dialects. In this paper, we propose a method for identifying commands by analyzing three lip motion features: lip width, lip length, and the ratio of width to length. The analysis is based on the relative values of these features obtained from the primary frame and the object frame. The proposed method has three steps. First, the lip motion features are extracted from the position and shape of the lips in each frame of the facial images. Second, standard patterns are created from the features of six utterances per command; the standard pattern reduces relative differences in the lip motion features. Third, similarities between commands are computed by Dynamic Programming (DP) matching, and the command with the largest similarity is selected as the target one.
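The DP matching in the third step can be sketched as a standard dynamic-time-warping alignment between an input utterance and a stored standard pattern. The per-frame feature tuples and the Euclidean local distance below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of DP matching between an utterance and a standard pattern.
# Each frame is assumed to be a tuple of three relative features:
# (lip width, lip length, width/length ratio). The Euclidean local
# distance is an assumption for illustration.
import math

def dp_matching(seq_a, seq_b):
    """Return the DP (dynamic time warping) distance between two
    sequences of per-frame lip-motion feature vectors; a smaller
    distance means higher similarity."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])   # local distance
            cost[i][j] = d + min(cost[i - 1][j],        # expand seq_a
                                 cost[i][j - 1],        # expand seq_b
                                 cost[i - 1][j - 1])    # match frames
    return cost[n][m]

# Hypothetical relative feature sequences for one utterance and one
# standard pattern (values are made up for illustration).
utterance = [(1.0, 1.0, 1.0), (1.2, 0.9, 1.33), (1.1, 1.0, 1.1)]
pattern   = [(1.0, 1.0, 1.0), (1.15, 0.95, 1.21), (1.1, 1.0, 1.1)]
print(dp_matching(utterance, pattern))
```

In a command-input setting, the distance would be computed against every command's standard pattern, and the command with the smallest distance (largest similarity) selected.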
Our experimental results suggest that the proposed method is useful for constructing a non-contact command-input interface based on lip motion features.