2011 Volume 10 Issue 4 Pages 523-534
This paper describes a system that composes operetta songs fitting story scenes represented by texts and/or pictures. The inputs to the system are an original theme music, numerical information on the given story scenes, and the story texts. The system composes variations on the theme music and generates lyrics according to the musical and lyrical impressions derived from the numerical information on the story scenes. Evolutionary computation is applied to the generation of both the variations and the lyrics. Using a vocal synthesizer and a General MIDI synthesizer, the system plays the operetta songs as variations on the theme music with the generated lyrics. The system reflects the user's Kansei (affective sensibility) in the variations and lyrics through interactive evolutionary computation. This paper also describes evaluation experiments that confirm whether the composed songs appropriately reflect the impressions of the story scenes.
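The abstract does not give implementation details of the interactive evolutionary computation, so the following is only a minimal, generic sketch of such a loop: candidate melodies are encoded as genomes (here, hypothetically, lists of MIDI pitches), the user's Kansei ratings serve as fitness, and selection, crossover, and mutation produce the next generation. All names, the genome encoding, and the operators are assumptions for illustration, not the paper's actual method.

```python
import random

# Hypothetical genome: a melody as a list of MIDI pitch numbers.
POP_SIZE = 8
MELODY_LEN = 16
PITCH_RANGE = (60, 72)  # C4..C5 (assumed range, for illustration)

def random_melody():
    return [random.randint(*PITCH_RANGE) for _ in range(MELODY_LEN)]

def crossover(a, b):
    # One-point crossover between two parent melodies.
    cut = random.randint(1, MELODY_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    # Replace each pitch with a random one at probability `rate`.
    return [random.randint(*PITCH_RANGE) if random.random() < rate else p
            for p in melody]

def evolve(population, ratings):
    # `ratings` stands in for the user's Kansei evaluation of each melody.
    ranked = [m for _, m in sorted(zip(ratings, population),
                                   key=lambda t: -t[0])]
    parents = ranked[:POP_SIZE // 2]  # truncation selection
    return [mutate(crossover(random.choice(parents),
                             random.choice(parents)))
            for _ in range(POP_SIZE)]

population = [random_melody() for _ in range(POP_SIZE)]
# In a real interactive loop the user listens and rates each candidate;
# here the ratings are stubbed with random scores.
ratings = [random.random() for _ in population]
population = evolve(population, ratings)
```

In the interactive setting, the stubbed `ratings` would be replaced by scores the user assigns after listening to each synthesized candidate, which is what lets the evolved variations track the user's Kansei.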