Journal of Japan Society for Fuzzy Theory and Intelligent Informatics
Online ISSN : 1881-7203
Print ISSN : 1347-7986
ISSN-L : 1347-7986
Regular
Imitation learning from unsegmented human motion based on N-gram statistics of linear prediction models
Tadahiro TANIGUCHI, Naoto IWAHASHI

2009 Volume 21 Issue 6 Pages 1143-1154

Abstract
This paper presents an imitation learning method that enables an autonomous robot to extract a demonstrator's characteristic motions by observing unsegmented human motion. To imitate another's motions through unsegmented interaction, the robot has to determine what to learn from the continuous time series. The learning architecture is built mainly on a switching autoregressive model (SARM), a keyword extraction method based on the minimum description length (MDL) principle, and singular value decomposition (SVD) to reduce the dimensionality of high-dimensional human body motion. In most previous research on robotic imitation learning, the target motions given to robots were segmented into meaningful parts by the experimenters in advance. However, to imitate certain behaviors from the continuous motion of a person, the robot needs to find the segments that should be learned. To achieve this, the learning architecture first reduces the dimensionality of the continuous time series with SVD and then converts it into a discrete sequence of letters with SARM. After the conversion, the proposed method finds characteristic motions by applying n-gram statistics evaluated by description length. In our experiment, a demonstrator showed several unsegmented motions to a robot. The results revealed that the framework enabled the robot to extract the prepared characteristic human motions.
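The pipeline described in the abstract (SVD dimensionality reduction, discretization into letters, n-gram extraction scored by description length) can be illustrated with a minimal sketch. This is not the authors' implementation: the data are synthetic, the SARM discretization is replaced here by a crude nearest-centroid quantizer, and the MDL criterion is simplified to a compression-gain score, all purely for illustration.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Synthetic "motion" time series: 200 frames x 30 joint angles (hypothetical data).
X = rng.normal(size=(200, 30))

# Step 1: dimensionality reduction via SVD, as in the paper's preprocessing step.
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
k = 3
Z = U[:, :k] * s[:k]  # low-dimensional trajectory, shape (200, 3)

# Step 2: discretize the trajectory into a sequence of "letters".
# The paper uses a switching autoregressive model (SARM); here we substitute
# a nearest-centroid quantizer with 5 randomly chosen centroids.
centroids = Z[rng.choice(len(Z), size=5, replace=False)]
letters = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

# Step 3: n-gram statistics. Frequently recurring n-grams that compress the
# sequence well are candidate "characteristic motions" (simplified MDL score).
def best_ngram(seq, n):
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    # Score: symbols saved by replacing occurrences with one new symbol,
    # minus the n symbols needed to store the pattern itself.
    return max(counts.items(), key=lambda kv: (kv[1] - 1) * (n - 1) - n)

pattern, freq = best_ngram(list(letters), n=3)
print(pattern, freq)
```

With a real SARM, each letter would correspond to a fitted linear prediction model, so a frequent n-gram is a recurring sequence of dynamical regimes rather than of quantized poses.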
© 2009 Japan Society for Fuzzy Theory and Intelligent Informatics