Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 02, 2018 - June 05, 2018
In recent years, input interfaces have diversified, with inputs ranging from gestures to voice. However, the input and output of such an interface are predefined by its designer, and there is no guarantee that the interface behaves as the user intends. Moreover, as the degrees of freedom of the controlled object increase, the interface tends to become complicated and to lose its intuitiveness. Creating an interface that is intuitive and easy to use for every user is a major challenge for designers. In this research, we extract human intention from body movements, especially hand movements, and associate them with the movements of a multi-degree-of-freedom spider robot, thereby constructing an interface that operates the robot from hand movements alone. Since hand movements reflect intentions that vary from user to user, the correspondence between hand movements and the motion of the controlled object is built using machine learning, with the aim of creating an interface optimized for each individual user. The interface constructed in this way showed insufficient operation performance for one subject, but sufficient operation performance for the other subjects.
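The abstract does not specify which machine learning method maps hand movements to robot motion, so the following is only a minimal sketch of one plausible approach: learning a per-user linear mapping from hand-pose features to multi-DOF robot commands by least squares. All dimensions, variable names, and the simulated calibration data are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

# Hypothetical sketch: fit a user-specific linear map from hand-pose
# features (e.g., fingertip coordinates) to robot joint commands.
# Sizes below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

n_samples, n_features, n_dof = 200, 6, 8
W_true = rng.normal(size=(n_features, n_dof))  # simulated "ground truth" map

# Simulated calibration data: hand movements paired with the robot
# motions the user intends (plus small measurement noise).
X = rng.normal(size=(n_samples, n_features))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_dof))

# Fit the mapping for this individual user by linear least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def hand_to_robot(features: np.ndarray) -> np.ndarray:
    """Map a hand-pose feature vector to multi-DOF robot commands."""
    return features @ W

# With enough calibration samples, the learned map should closely
# recover the simulated one.
print(np.allclose(W, W_true, atol=0.1))
```

Because the mapping is fit from each user's own demonstration data, the same pipeline yields a different `W` per user, which is one way to realize the per-user optimization the abstract describes; a nonlinear regressor could be substituted when the intention-to-motion relation is not linear.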