Abstract
This paper describes a prototype system that generates operetta songs fitting story scenes represented by text and/or pictures. The inputs to the system are an original piece of theme music, numerical information characterizing the given story scenes, and the story text. The system outputs audio, playing songs generated to match the impressions of the story scenes. The song generation process consists of two phases: an initial song generation phase and an interaction phase. In the initial song generation phase, the system generates variations on the theme music and lyrics according to the musical and lyrical images derived from the numerical information on the given story scenes; evolutionary computation is applied in this phase. Using a vocal synthesizer and a General MIDI synthesizer, the system plays the generated variations on the theme music together with the lyrics. In the interaction phase, the system reflects the user's Kansei (affective impressions) in the variations on the theme music and lyrics by means of interactive evolutionary computation. This paper also describes preliminary evaluation experiments conducted to confirm whether the generated songs appropriately reflect the impressions of the story scenes.
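
The abstract does not specify the encoding or genetic operators used. Purely as an illustrative sketch of the interactive evolutionary computation loop described above, the following Python fragment (all names, encodings, and parameters are hypothetical assumptions, not the authors' design) evolves candidate song-variation parameter vectors using a user's rating as the fitness function.

    import random

    # Hypothetical sketch: each candidate is a parameter vector for a song
    # variation (e.g. tempo scale, key shift, rhythm density, lyric mood).
    # The user's rating of the rendered song serves directly as fitness.
    GENES = 4
    POP_SIZE = 8
    MUT_RATE = 0.3

    def random_candidate():
        return [random.uniform(0.0, 1.0) for _ in range(GENES)]

    def mutate(cand):
        # Perturb each gene with probability MUT_RATE.
        return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
                for g in cand]

    def crossover(a, b):
        # One-point crossover between two parent vectors.
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    def evolve(rate_fn, generations=5):
        pop = [random_candidate() for _ in range(POP_SIZE)]
        for _ in range(generations):
            # In the real system the user listens to each rendered variation
            # and rates it; rate_fn stands in for that interactive judgment.
            scored = sorted(pop, key=rate_fn, reverse=True)
            parents = scored[:POP_SIZE // 2]  # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            pop = parents + children
        return max(pop, key=rate_fn)

    # Example with an automatic stand-in rating instead of a human user.
    best = evolve(lambda c: sum(c))
    print(best)

In the actual system, the fitness evaluation is the point of interaction: the user's Kansei-based judgments of the played songs steer the population of variations toward the desired impression.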