2013, Vol. 18, No. 2, pp. 141-150
Our contribution is to propose a method for augmenting food texture using a sound augmented reality (AR) system that exploits a crossmodal effect, to evaluate a prototype implementation, and to confirm the validity of our system. Cognitive science has established relationships between food texture and sound; however, these findings have been demonstrated only in laboratory settings and are not directly applicable to everyday life. Our research question is how user interface design can improve the daily eating experience, and we focus on food texture. Our hypothesis is that food texture can be augmented by presenting sound in synchronization with the motion of the user's mastication. To this end, we designed a device and software that detect the mastication action and control the sound AR system. We then evaluated the system's sound delay, frequency control, and overall effects. Finally, we demonstrated the system at a public conference to show its feasibility in a living environment.
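The core loop described above, detecting a chewing action from a sensor signal and triggering a sound at that instant, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor signal, the threshold values, and the hysteresis scheme are all assumptions, and the actual device, filtering, and audio pipeline are not specified here.

```python
from dataclasses import dataclass, field

@dataclass
class MasticationDetector:
    """Detects chew onsets in a sampled jaw-sensor signal using hysteresis.

    Thresholds are illustrative assumptions, not values from the paper.
    """
    on_threshold: float = 0.6    # assumed level marking the jaw closing
    off_threshold: float = 0.3   # assumed level that re-arms the detector
    _armed: bool = field(default=True, repr=False)
    onsets: list = field(default_factory=list)

    def feed(self, t: float, level: float) -> bool:
        """Feed one timestamped sample; return True on a chew onset."""
        if self._armed and level >= self.on_threshold:
            self._armed = False
            self.onsets.append(t)
            return True           # the real system would play a texture sound here
        if not self._armed and level <= self.off_threshold:
            self._armed = True    # jaw opened again; ready for the next chew
        return False

# Simulated sensor trace containing two chews (hypothetical data)
samples = [(0.00, 0.1), (0.05, 0.7), (0.10, 0.8), (0.15, 0.2),
           (0.20, 0.1), (0.25, 0.9), (0.30, 0.4), (0.35, 0.2)]

det = MasticationDetector()
onsets = [t for t, level in samples if det.feed(t, level)]
print(onsets)  # → [0.05, 0.25]
```

The hysteresis (separate on/off thresholds) prevents one chew from firing multiple sound events, which matters because the abstract identifies sound delay and synchronization with mastication as the quantities the authors evaluated.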