The effect of music on human emotion has long been studied. Research on the emotions evoked by music, such as the feelings and impressions experienced while listening, is an established field. However, while many studies have examined how music induces emotion, few have attempted the reverse: creating music from an emotion.
Therefore, in this study, we focus on facial expressions as a representation of emotion and aim to create music that matches the emotion recognized from a facial image. For example, the system automatically generates bright, pleasant music from an image of a laughing face, or dark, sad music from an image of a crying face. Russell’s circumplex model was used for emotion recognition, and Hevner’s circular scale was used to generate music corresponding to the recognized emotions. With this system it becomes possible, for example, to create suitable background music (BGM) for a movie scene from only the actor’s facial image during film production. We constructed the system described above and confirmed its effectiveness through a Kansei evaluation experiment.
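The pipeline described above (facial image → valence/arousal position in Russell’s circumplex model → Hevner adjective cluster → music mood) can be sketched minimally as follows. This is an illustrative assumption, not the paper’s implementation: the quadrant-to-cluster assignment and the cluster adjectives are chosen for illustration based on Hevner’s standard eight-group circle.

```python
# Minimal sketch (an assumption, not the authors' implementation):
# map a point in Russell's valence-arousal plane to one of Hevner's
# eight adjective clusters, which would then guide music generation.

HEVNER_CLUSTERS = {
    1: "dignified, solemn",
    2: "sad, mournful",
    3: "dreamy, tender",
    4: "serene, calm",
    5: "playful, light",
    6: "happy, bright",
    7: "exciting, agitated",
    8: "vigorous, majestic",
}

def emotion_to_cluster(valence: float, arousal: float) -> int:
    """Pick a Hevner cluster from valence/arousal values in [-1, 1].

    The quadrant assignment here is a hypothetical example:
    e.g. a laughing face (high valence, high arousal) maps to
    "happy, bright"; a crying face (low valence, low arousal)
    maps to "sad, mournful".
    """
    if valence >= 0 and arousal >= 0:
        return 6  # pleased / excited -> happy, bright
    if valence >= 0:
        return 4  # relaxed / content -> serene, calm
    if arousal < 0:
        return 2  # depressed / gloomy -> sad, mournful
    return 7      # distressed / tense -> exciting, agitated

print(HEVNER_CLUSTERS[emotion_to_cluster(0.8, 0.6)])    # laughing face
print(HEVNER_CLUSTERS[emotion_to_cluster(-0.7, -0.5)])  # crying face
```

A downstream music generator could then translate the selected cluster into musical parameters (tempo, mode, dynamics), which is where Hevner’s original adjective-to-musical-feature associations would come into play.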