2015 Volume 10 Issue 2 Pages 269-280
In the area of activity recognition with mobile sensors, many context-aware systems using accelerometers have been proposed. In particular, mobile phones and video-game remotes that employ gesture recognition technologies enable easy and intuitive operations such as scrolling a browser and drawing objects. Gesture input offers richer expressive power than conventional interfaces, but it is difficult to share a gesture motion with other people in writing or verbally. When a commercial product using gestures is released, the developers prepare an instruction manual and tutorial that express the gestures in text, figures, or videos. An end-user then reads the instructions, imagines the gesture, and performs it. In this paper, we evaluate how users' gestures change according to the type of instruction. We collected acceleration data for 10 kinds of gestures instructed through three types of media (text, figures, and videos), totalling 44 instruction patterns, from 13 test subjects, for a total of 2,630 data samples. The evaluation showed that the accuracy of performed gestures increased in the order of text, figure, and video instructions. Detailed instruction in text was equivalent to instruction in figures. However, some words describing gestures disrupted the users' motions, since a single word could evoke multiple mental images.