As the ubiquitous computing environment takes root, there is growing demand for services that deliver content matched to each user's personal interests and preferences. Services already exist that deduce a user's interests or preferences by analyzing his or her log data and that provide content appropriate to his or her physical, mental, and emotional state. One proposed method deduces the user's state of mind by analyzing the pictographic characters in emails sent from mobile phones. In this paper, we attempt to improve the accuracy of state-of-mind deduction by analyzing not only pictographic characters but also emoticons, which many people use to express their feelings explicitly. We have developed an algorithm that extracts the state-of-mind elements associated with each pictographic character or emoticon and represents each character as a vector of values over these elements. We have built a prototype system and verified the effectiveness of the algorithm by having a group of students use it. We have also applied the algorithm to the selection of music, which is considered to be closely related to people's feelings. Specifically, we propose a method of selecting an appropriate piece of music based on its music type, which is represented by the "number of chords", "sound strength", and "melody pattern" of the piece.
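To make the two-step idea concrete, the sketch below shows one plausible realization in Python: each emoticon or pictographic character is mapped to a vector over a fixed set of state-of-mind elements, the vectors of all symbols found in a message are averaged to estimate the sender's overall state of mind, and that estimate is then matched against music types characterized by the three attributes named above. The element names, symbol-to-vector values, mood-to-music mapping, and catalog entries are all hypothetical illustrations, not the paper's actual data or selection criterion.

```python
from dataclasses import dataclass

# Hypothetical state-of-mind elements; the paper's actual element set may differ.
ELEMENTS = ("joy", "sadness", "anger", "calm")

# Illustrative vector values for a few emoticons / pictographs (not from the paper).
SYMBOL_VECTORS = {
    "(^_^)": (0.9, 0.0, 0.0, 0.4),
    "(;_;)": (0.0, 0.9, 0.1, 0.0),
    "(>_<)": (0.1, 0.2, 0.8, 0.0),
    "\U0001F60A": (0.8, 0.0, 0.0, 0.5),  # smiling-face pictograph
}

def deduce_state_of_mind(message: str) -> dict[str, float]:
    """Average the vectors of all known symbols found in the message."""
    found = [vec for sym, vec in SYMBOL_VECTORS.items() if sym in message]
    if not found:
        return {e: 0.0 for e in ELEMENTS}  # no evidence -> neutral state
    n = len(found)
    return {e: sum(vec[i] for vec in found) / n for i, e in enumerate(ELEMENTS)}

# A music type is characterized by the three attributes named in the paper:
# number of chords, sound strength, and melody pattern.
@dataclass
class MusicType:
    name: str
    num_chords: int
    sound_strength: float  # e.g. normalized average loudness in [0, 1]
    melody_pattern: str    # e.g. "ascending", "descending", "repetitive"

def select_music(state: dict[str, float], catalog: list[MusicType]) -> MusicType:
    """Pick the music type whose sound strength best matches the deduced mood.

    Collapsing the mood vector to a single scalar and matching it against
    sound strength is a simplifying assumption for this sketch; the paper's
    actual selection method may weigh all three attributes.
    """
    mood = state["joy"] + state["calm"] - state["sadness"] - state["anger"]
    target = (mood + 2.0) / 4.0  # map mood from [-2, 2] into [0, 1]
    return min(catalog, key=lambda m: abs(m.sound_strength - target))

catalog = [
    MusicType("ballad", num_chords=4, sound_strength=0.2, melody_pattern="descending"),
    MusicType("pop", num_chords=6, sound_strength=0.6, melody_pattern="repetitive"),
    MusicType("up-tempo", num_chords=8, sound_strength=0.9, melody_pattern="ascending"),
]
state = deduce_state_of_mind("Passed the exam! (^_^)")
print(select_music(state, catalog).name)  # a joyful message selects "up-tempo"
```

Averaging the per-symbol vectors is only one way to aggregate evidence from a message; weighting symbols by position or frequency would be a natural refinement.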