Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 05, 2019 - June 08, 2019
Representing and modeling flexible objects is difficult because such objects deform under external forces. When robots handle these objects, it is crucial to predict the deformation caused by the forces they apply during manipulation. Somatosensory information is essential for this prediction, but no established method exists for predicting deformation from multimodal information. In this paper, we conducted experiments to predict images of cloth deformation, not by modeling the objects directly, but by using a deep neural network. The model consists of a Long Short-Term Memory (LSTM) module that takes visual images and somatosensory information as input. We operated a dual-arm manipulator to handle cloth and collected visual and somatosensory data. Compared with inputting visual images alone, the model produces sharper and longer-horizon predictions when given both visual images and somatosensory data.
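The abstract does not give implementation details, so the following is only a minimal sketch in PyTorch of the kind of architecture described: a CNN encoder per frame, fusion with a somatosensory vector, an LSTM over time, and a deconvolutional decoder that renders the predicted image. The 64x64 resolution, the 14-dimensional somatosensory vector, all layer sizes, and the MSE objective are assumptions for illustration, not the authors' actual model.

```python
import torch
import torch.nn as nn


class MultimodalPredictor(nn.Module):
    def __init__(self, somato_dim=14, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Encoder: 64x64 RGB frame -> feat_dim vector (sizes are assumptions).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, feat_dim),
        )
        # LSTM consumes the fused visual + somatosensory feature at each step.
        self.lstm = nn.LSTM(feat_dim + somato_dim, hidden_dim, batch_first=True)
        # Decoder: LSTM hidden state -> predicted frame.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames, somato):
        # frames: (B, T, 3, 64, 64); somato: (B, T, somato_dim)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.reshape(b * t, 3, 64, 64)).reshape(b, t, -1)
        fused = torch.cat([feats, somato], dim=-1)   # multimodal fusion
        hidden, _ = self.lstm(fused)
        preds = self.decoder(hidden.reshape(b * t, -1)).reshape(b, t, 3, 64, 64)
        return preds  # one predicted frame per input timestep


# Dummy usage: in training, the target would be the next frame in the
# sequence (here the same frames stand in, just to show the shapes).
model = MultimodalPredictor()
frames = torch.rand(2, 10, 3, 64, 64)  # dummy image sequence
somato = torch.rand(2, 10, 14)         # dummy somatosensory sequence
loss = nn.functional.mse_loss(model(frames, somato), frames)
```

Dropping the `somato` input (or zeroing it) would reduce this to the vision-only baseline the abstract compares against, which is one way to reproduce that ablation.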