Abstract
This paper describes the construction of a system that transforms a piece of theme music so that it fits story scenes represented by text and/or pictures, generating variations on the theme music. The inputs to the proposed system are an original piece of theme music and numerical information describing the given story scenes. The system varies (1) the melody, (2) the tempo, (3) the tonality, and (4) the accompaniment of the theme music based on the impressions of the story scenes. Neural network models are applied to the music generation in order to reflect the user's sensitivity to music and stories. This paper also describes evaluation experiments conducted to confirm whether the generated variations on the theme music appropriately reflect the impressions of the given story scenes.