Organizer: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2019
Dates: 2019/06/05 - 2019/06/08
Representing and modeling flexible objects is relatively difficult because these objects are deformed by external forces. When robots handle such objects, it is crucial to predict the deformation caused by the external forces they apply during manipulation. Somatosensory information is essential for predicting this deformation, but no method for predicting deformation from multimodal information has been established. In this paper, we conducted experiments to predict cloth deformation images, not by directly modeling the objects, but by using a deep neural network. The model consists of a Long Short-Term Memory (LSTM) module that takes visual images and somatosensory information as input. We operated a dual-arm manipulator to handle cloth and collected visual and somatosensory data. Compared to inputting only visual images, the model produces sharper and longer-term predictions when given both visual images and somatosensory data.
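To make the described architecture concrete, below is a minimal sketch (not the authors' implementation) of an LSTM-based multimodal prediction model of the kind the abstract outlines: a convolutional encoder maps each camera frame to a feature vector, that vector is concatenated with the somatosensory reading (e.g., joint angles and torques), an LSTM integrates the sequence, and a deconvolutional decoder predicts the next frame. All layer sizes, the 64x64 image resolution, the 14-dimensional somatosensory input, and the pixel-wise MSE loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalPredictor(nn.Module):
    """Predicts the next cloth image from past images + somatosensory data.
    Hypothetical sketch; dimensions are assumptions, not from the paper."""

    def __init__(self, soma_dim=14, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Image encoder: 64x64 RGB frame -> feat_dim feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, feat_dim),
        )
        # LSTM over concatenated visual + somatosensory features
        self.lstm = nn.LSTM(feat_dim + soma_dim, hidden_dim, batch_first=True)
        # Decoder: LSTM hidden state -> predicted next 64x64 RGB frame
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, images, soma):
        # images: (B, T, 3, 64, 64), soma: (B, T, soma_dim)
        B, T = images.shape[:2]
        feats = self.encoder(images.reshape(B * T, 3, 64, 64)).reshape(B, T, -1)
        h, _ = self.lstm(torch.cat([feats, soma], dim=-1))
        return self.decoder(h.reshape(B * T, -1)).reshape(B, T, 3, 64, 64)

# Training would compare each predicted frame against the actual next frame:
# pred = model(images[:, :-1], soma[:, :-1])
# loss = nn.functional.mse_loss(pred, images[:, 1:])
```

Dropping the somatosensory branch (feeding only image features to the LSTM) yields the vision-only baseline the abstract compares against; the reported advantage of the multimodal input is sharper images over longer prediction horizons.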