Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 1O4-OS-29a-03

An Approach to Emotion-based Music Generation using Diffusion Model
*Moyu KAWABE, Ichiro KOBAYASHI
Abstract

Diffusion-based models have attracted attention in music generation in recent years for their high quality and scalability, and research on generating music on demand with diffusion models has also been conducted. However, controlling complex attributes in a diffusion model is not easy. In addition, there have been few studies of music generation that emphasize emotion, despite its close relationship to music. In this study, we aim to develop a method that takes an emotion as input and generates a variety of music with a diffusion model, controlling generation through the musical attributes that correspond to that emotion. As the diffusion model we adopt Diffusion-LM, whose output can be steered by a classifier at each denoising step; the classifier identifies emotions from musical attribute values, and music is generated according to the input emotion information.
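The per-step classifier guidance described above can be illustrated with a minimal PyTorch sketch of one guided reverse-diffusion step. The denoiser and emotion_classifier interfaces, the guidance_scale parameter, and all other names here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

@torch.no_grad()
def guided_denoising_step(denoiser, emotion_classifier, x_t, t,
                          target_emotion, guidance_scale=1.0):
    """One reverse-diffusion step steered toward a target emotion.

    Assumes denoiser(x_t, t) returns the mean and log-variance of
    p(x_{t-1} | x_t), emotion_classifier(x, t) returns logits over
    emotion classes for a noisy latent x at step t, and t is an
    integer timestep.
    """
    mean, log_var = denoiser(x_t, t)

    # The classifier gradient needs autograd, so re-enable it locally.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        logits = emotion_classifier(x_in, t)
        log_prob = F.log_softmax(logits, dim=-1)[:, target_emotion].sum()
        grad = torch.autograd.grad(log_prob, x_in)[0]

    # Shift the predicted mean along the gradient of the emotion
    # log-likelihood, scaled by the step variance (classifier guidance).
    guided_mean = mean + guidance_scale * log_var.exp() * grad

    # Add noise except at the final step (t == 0).
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return guided_mean + (0.5 * log_var).exp() * noise

Iterating this step from pure noise down to t = 0 yields a sample nudged at every stage toward the musical attributes the classifier associates with the target emotion.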

© 2024 The Japanese Society for Artificial Intelligence