Abstract
This paper proposes a hierarchical model, composed of slow feature analysis (SFA) networks, that extracts a multi-modal representation for a humanoid robot. Experiments with a humanoid robot show that the network integrates multi-modal information and detects semantic features by extracting slowly varying features from high-dimensional sensory input signals. They further show that the resulting multi-modal representation serves as a more effective state representation for reinforcement learning than a state representation built without the integration of multi-modal information.
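To illustrate the core operation the abstract refers to, the following is a minimal sketch of linear SFA (not the paper's hierarchical network): the input is whitened, and the directions along which the whitened signal changes most slowly over time are found as the smallest eigenvectors of the covariance of the temporal differences. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def sfa(X, n_components=2):
    """Linear slow feature analysis on a (T, D) time series X.

    Returns the n_components most slowly varying linear projections.
    """
    # Center and whiten the input so its covariance becomes the identity.
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W_whiten = eigvec / np.sqrt(eigval)  # columns scaled by 1/sqrt(eigenvalue)
    Z = X @ W_whiten

    # Covariance of temporal differences: slow directions have small variance here.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)  # eigenvalues in ascending order

    # Project onto the slowest directions (smallest difference-variance).
    W = W_whiten @ dvec[:, :n_components]
    return X @ W
```

For example, given a linear mixture of a slow sinusoid and a fast one, the first SFA component recovers the slow source up to sign and scale; in a hierarchical model, such layers would be stacked over high-dimensional sensory streams.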