2017, Vol. 22, No. 3, pp. 379-389
Head-mounted displays (HMDs) allow people to enjoy immersive VR experiences. A virtual avatar can represent a user in the virtual environment; however, the facial expressiveness of an avatar driven by an HMD user is constrained. A major problem of wearing an HMD is that a large portion of the face is occluded, making facial expression recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technique that uses retro-reflective photoelectric sensors. Sensors attached inside the HMD measure the distance between each sensor and the face. Distance values for five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the user's facial expression. By using regression, our system can also reproduce facial expression changes on an existing avatar in real time.
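The pipeline described in the abstract (sensor-to-face distance vectors regressed onto per-expression weights) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor count, the synthetic distance data, and the use of ridge regression in place of the paper's neural network are all assumptions made for the example.

```python
import numpy as np

# Hypothetical setup: 8 photo-reflective sensors inside the HMD and the
# 5 basic expressions named in the abstract.
rng = np.random.default_rng(0)
n_sensors = 8
expressions = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]

# Synthetic training data: each expression is assumed to produce a
# characteristic sensor-to-skin distance pattern (in mm), plus noise.
prototypes = rng.uniform(2.0, 10.0, size=(len(expressions), n_sensors))
X = np.vstack([p + rng.normal(0, 0.1, size=(40, n_sensors)) for p in prototypes])
Y = np.repeat(np.eye(len(expressions)), 40, axis=0)  # one-hot expression targets

# Ridge regression from distance vectors to expression weights
# (a simple stand-in for the trained neural network).
lam = 1e-3
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)

def estimate(distances):
    """Map one frame of raw sensor readings to per-expression weights."""
    w = np.append(distances, 1.0) @ W
    return np.clip(w, 0.0, 1.0)  # clip so weights can drive avatar blendshapes

# A noisy "Happy" frame should yield the highest weight for Happy.
reading = prototypes[1] + rng.normal(0, 0.1, n_sensors)
weights = estimate(reading)
print(expressions[int(np.argmax(weights))])
```

Because the regression outputs continuous weights rather than a single class label, intermediate values can blend between expressions frame by frame, which matches the real-time avatar reproduction described above.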