2021, Vol. 40, No. 1, pp. 27-31
In robotics, developments are in progress to enhance the physical and cognitive capabilities of humans by technologically extending the human body. To control an extended body such as a wearable robot arm, methods have been developed that map the degrees of freedom of the operator's arms and legs to operating commands. However, such a method has the limitation that it constrains the freedom of the operator's limbs, because their movements are mapped to the actions of the wearable robot arm. This article introduces a research project that uses virtual reality, wearable sensing technology, and machine learning to map human head movements and facial expressions to the control of an extended body, so that its operation can be independent of the actions of the human limbs.
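To illustrate the idea of controlling an extended body without occupying the limbs, the following is a minimal sketch of such a mapping. It is not the project's actual implementation: the thresholds, command names, expression labels, and the rule-based mapping (in place of a learned model) are all assumptions introduced for illustration.

```python
# Illustrative sketch (NOT the authors' implementation): a rule-based mapper
# from head pose and a facial-expression label to discrete wearable-arm
# commands. Thresholds, labels, and command names are assumptions.

from dataclasses import dataclass


@dataclass
class HeadState:
    yaw_deg: float    # left/right head rotation, degrees
    pitch_deg: float  # up/down head rotation, degrees
    expression: str   # hypothetical label from a face classifier,
                      # e.g. "neutral", "smile", "frown"


def map_to_command(state: HeadState) -> str:
    """Map head pose and expression to an arm command,
    leaving the operator's arms and legs free."""
    # Expression acts as a mode switch for the gripper.
    if state.expression == "smile":
        return "grip"
    if state.expression == "frown":
        return "release"
    # Head orientation steers the end-effector.
    if state.yaw_deg > 15:
        return "move_right"
    if state.yaw_deg < -15:
        return "move_left"
    if state.pitch_deg > 10:
        return "move_down"
    if state.pitch_deg < -10:
        return "move_up"
    return "hold"


if __name__ == "__main__":
    print(map_to_command(HeadState(20.0, 0.0, "neutral")))  # move_right
    print(map_to_command(HeadState(0.0, 0.0, "smile")))     # grip
```

In a learning-based variant, the hand-written rules would be replaced by a classifier trained on sensor data, but the interface (head/face state in, arm command out) stays the same.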