Abstract
This paper presents an imitation learning method that enables an autonomous robot to extract a demonstrator's characteristic motions by observing unsegmented human motion. To imitate another agent's motions through unsegmented interaction, the robot has to determine what to learn from the continuous time series. The learning architecture is built mainly on a switching autoregressive model (SARM), a keyword extraction method based on the minimum description length (MDL) principle, and singular value decomposition (SVD) to reduce the dimensionality of high-dimensional human motion data. In most previous research on robotic imitation learning, the target motions given to robots were segmented into meaningful parts by the experimenters in advance. However, to imitate certain behaviors from the continuous motion of a person, the robot must itself find the segments worth learning. To achieve this, the learning architecture first reduces the dimensionality of the continuous time series with SVD and then converts it into a discrete sequence of letters with the SARM. The proposed method then finds characteristic motions by applying n-gram statistics to this letter sequence and selecting segments according to their description length. In our experiment, a demonstrator displayed several unsegmented motions to a robot. The results revealed that the framework enabled the robot to acquire several of the prepared characteristic human motions.
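To make the pipeline concrete, the following is a minimal Python sketch of two of the steps the abstract names: SVD-based dimensionality reduction of the motion data, and an MDL-style gain score for a candidate n-gram in the discretized letter sequence. The SARM discretization itself is omitted; the function names, the unigram code-length model, and the "NEW" placeholder symbol are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def svd_reduce(X, k):
    # Center the motion data (frames x joint dimensions) and project it
    # onto the top-k right singular vectors: standard SVD reduction.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def description_length(seq):
    # Shannon code length (in bits) of a symbol sequence under its
    # empirical unigram distribution; a simple stand-in for the paper's
    # description-length measure.
    counts = Counter(seq)
    n = len(seq)
    return -sum(c * np.log2(c / n) for c in counts.values())

def mdl_gain(seq, ngram):
    # Description-length reduction obtained by replacing every
    # occurrence of `ngram` with one new symbol, minus the cost of
    # storing the n-gram itself. A positive gain marks the n-gram as a
    # characteristic-motion candidate.
    replaced, i = [], 0
    while i < len(seq):
        if tuple(seq[i:i + len(ngram)]) == ngram:
            replaced.append("NEW")  # hypothetical placeholder symbol
            i += len(ngram)
        else:
            replaced.append(seq[i])
            i += 1
    dict_cost = description_length(list(ngram))
    return description_length(seq) - (description_length(replaced) + dict_cost)

# Toy usage: a repeated "abc" pattern in the letter sequence yields a
# positive gain, so it would be extracted as a characteristic segment.
letters = list("abcxyzabcxyzabcq")
print(mdl_gain(letters, ("a", "b", "c")))
```

Scoring candidates by the change in total description length, rather than raw frequency, penalizes long n-grams that occur only rarely, which matches the abstract's use of the MDL principle for keyword extraction.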