With information technologies increasingly involved in areas such as (online) shopping, entertainment, or advertisement, computer systems must be able to process Kansei information, i.e., information relevant to users' sensibilities. Rather than modeling the biology of users' sensibilities, we suggest a functional approach by modeling the translation process between different modalities of expression of the same Kansei concept. We hypothesize that this translation process can be grounded in the categorization of users' perception, i.e., the extraction of structures in multimedia information. Because this translation process is intrinsically variable, we propose a computational agent, called K-Agent, able to learn categories in its visual perception and to interactively evolve a translation language. The K-Agent consists of three main modules: a multi-feature image processing unit, a learning kernel that iteratively constructs the translation language, and a feedback interpreter that incorporates self-supervision and user feedback to structurally tune the learning kernel. The K-Agent concept has been evaluated in a real-world application involving user Kansei, more specifically, the filtering of images against a given user's Kansei impression. Our experimental results demonstrate the feasibility of the concept as well as superior performance compared to manually filtering the output of existing search engines.
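As an illustration only, the following is a minimal Python sketch of the three-module organisation described above (feature extraction, a learning kernel that maps perceptual categories to Kansei labels, and a feedback interpreter that tunes the kernel). All class and method names are hypothetical, and the categorization step is reduced to a toy nearest-prototype rule; the abstract does not specify the actual features, learning rule, or feedback protocol.

```python
import numpy as np
from typing import Optional

class FeatureExtractor:
    """Stand-in for the multi-feature image processing unit:
    returns a fixed-length feature vector for an image array."""
    def extract(self, image: np.ndarray) -> np.ndarray:
        # Toy visual features: mean intensity and intensity spread.
        return np.array([image.mean(), image.std()])

class LearningKernel:
    """Stand-in for the learning kernel: one prototype per Kansei
    label, classification by nearest prototype."""
    def __init__(self):
        self.prototypes = {}  # label -> prototype feature vector

    def categorize(self, features: np.ndarray) -> Optional[str]:
        if not self.prototypes:
            return None
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(features - self.prototypes[lbl]))

    def update(self, features: np.ndarray, label: str, rate: float = 0.2):
        # Move the label's prototype towards the observed features.
        proto = self.prototypes.get(label, features.copy())
        self.prototypes[label] = proto + rate * (features - proto)

class FeedbackInterpreter:
    """Stand-in for the feedback interpreter: converts acceptance or
    correction signals into updates of the learning kernel."""
    def __init__(self, kernel: LearningKernel):
        self.kernel = kernel

    def apply(self, features: np.ndarray, predicted: Optional[str],
              accepted: bool, corrected_label: Optional[str] = None):
        if accepted and predicted is not None:
            self.kernel.update(features, predicted)        # self-supervision
        elif corrected_label is not None:
            self.kernel.update(features, corrected_label)  # user correction

class KAgent:
    """Wires the three modules together for Kansei-based image filtering."""
    def __init__(self):
        self.extractor = FeatureExtractor()
        self.kernel = LearningKernel()
        self.feedback = FeedbackInterpreter(self.kernel)

    def filter_images(self, images, target_label: str):
        # Keep only images whose predicted Kansei label matches the target.
        return [img for img in images
                if self.kernel.categorize(self.extractor.extract(img)) == target_label]
```

In this sketch, filtering against a user's Kansei impression amounts to calling `filter_images` with the target label, while repeated calls to `FeedbackInterpreter.apply` gradually reshape the prototypes, standing in for the interactive evolution of the translation language.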