Host : The Japanese Society for Artificial Intelligence
Name : The 26th Annual Conference of the Japanese Society for Artificial Intelligence, 2012
Number : 26
Location : [in Japanese]
Date : June 12, 2012 - June 15, 2012
Current music recommender systems use only basic information when recommending music to their listeners, typically artist, album, genre, tempo, and other song metadata. Online recommender systems additionally incorporate ratings and annotation tags contributed by other users. We propose a recommender system that recommends music according to how the listener wants to feel while listening to it. The user-specific model we use is derived by analyzing the brainwaves of the subject while they actively listened to emotion-inducing music. The brainwaves are analyzed to estimate the listener's emotional state for different segments of the music. Using a motif discovery algorithm, we discover pairs of similar subsequences in the emotion data and find correlations with music and audio features of the song. Similar patterns are clustered and used to recommend music that evokes a similar emotional response from the listener.
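The motif discovery step described above could, as an illustrative sketch only (the abstract does not specify the authors' actual algorithm), be approximated by a brute-force search for the closest pair of non-overlapping subsequences in a univariate emotion time series under z-normalized Euclidean distance. The function name, subsequence length, and distance measure here are assumptions for illustration, not details from the paper.

```python
import numpy as np

def znorm(x):
    """Z-normalize a subsequence so only its shape (not offset/scale) matters."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def find_motif_pair(series, m):
    """Brute-force motif discovery: return indices (i, j) of the two
    non-overlapping length-m subsequences with the smallest z-normalized
    Euclidean distance. O(n^2) comparisons; real systems would use a
    faster index or matrix-profile-style method."""
    n = len(series) - m + 1
    best_d, best_i, best_j = np.inf, -1, -1
    for i in range(n):
        a = znorm(series[i:i + m])
        # start j at i + m to exclude trivial (overlapping) matches
        for j in range(i + m, n):
            d = np.linalg.norm(a - znorm(series[j:j + m]))
            if d < best_d:
                best_d, best_i, best_j = d, i, j
    return best_i, best_j

# Toy usage: plant the same pattern twice in noise and recover its positions.
rng = np.random.default_rng(42)
x = rng.standard_normal(100)
pattern = 5 * np.sin(np.linspace(0, 2 * np.pi, 10))
x[20:30] = pattern
x[70:80] = pattern
motif = find_motif_pair(x, 10)
```

The non-overlap constraint (`j >= i + m`) is the standard trivial-match exclusion in motif discovery: without it, the best pair would always be a subsequence matched against a one-sample shift of itself.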