Abstract
This paper presents an attempt at 'non-verbal mapping' between music and images in the field of Kansei information processing. We use the physical parameters of sound and color to clarify their direct correspondence when these parameters are varied continuously. First, we derive a mapping rule between sound and color from people with the special ability known as 'colored hearing'. Next, we apply this mapping to ordinary people through a paired-comparison test and key-identification training, and confirm that the mapping is also acceptable to them.