2020, Vol. 25, No. 4, pp. 451-461
Supernumerary robotic arms can increase the degrees of freedom available to a human, but they require a control method that reflects the operator's intention. To control the arms in accordance with that intention, we propose an operation method based on the operator's facial expressions, mapping each expression to a supernumerary robotic arm operation. Facial expressions are measured with an optical sensor-based approach (here, sensors mounted inside a head-mounted display); the sensor data are fed to an SVM that classifies them into facial expressions. The supernumerary robotic arms then carry out operations according to the predicted expression. We built a virtual reality environment containing supernumerary robotic arms and a synchronizable avatar to investigate the most suitable mapping between facial expressions and arm operations.
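The pipeline described above (optical sensor readings, an SVM expression classifier, then an expression-to-operation mapping) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sensor data are synthetic, and the expression labels, sensor count, and command mapping are all hypothetical placeholders.

```python
# Hypothetical sketch (not the authors' code): classify facial
# expressions from optical sensor readings with an SVM, then map
# each predicted expression to a supernumerary-arm operation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed setup: each sample is a vector of optical sensor
# intensities captured inside the HMD; labels are expressions.
EXPRESSIONS = ["neutral", "smile", "frown"]   # hypothetical label set
n_per_class, n_sensors = 50, 8                # hypothetical dimensions

# Synthetic training data: each expression shifts the sensor baseline.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(n_per_class, n_sensors))
               for i in range(len(EXPRESSIONS))])
y = np.repeat(EXPRESSIONS, n_per_class)

# Train the SVM classifier on the labeled sensor vectors.
clf = SVC(kernel="rbf").fit(X, y)

# Hypothetical mapping from predicted expression to an arm operation.
COMMANDS = {"neutral": "hold", "smile": "grasp", "frown": "release"}

# Classify a new sensor reading and look up the arm command.
sample = rng.normal(loc=1, scale=0.3, size=(1, n_sensors))  # smile-like
expression = clf.predict(sample)[0]
command = COMMANDS[expression]
print(expression, "->", command)
```

In a real system the synthetic data would be replaced by calibrated per-operator sensor recordings, and the command lookup would drive the arm controller instead of printing.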