The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2023
Session ID : 1A2-F20

Multi-Fingered Dragging of Unknown Objects and Orientations Using Distributed Tactile Information Through Vision-Transformer and LSTM
*Takahisa UENO, Satoshi FUNABASHI, Hiroshi ITO, Alexander SCHMITZ, Tetsuya OGATA, Shigeki SUGANO
Abstract

Multi-fingered hands can achieve stable grasping manipulation because their fingers make contact with an object synchronously, and they can demonstrate skillful motions such as dragging unknown objects. Abundant tactile information is especially useful for multi-fingered manipulation, which causes visual occlusions. However, generating dexterous motions from high-density tactile sensors is difficult due to the complicated contact states. In this paper, we propose a novel deep predictive learning approach using a Vision Transformer (ViT) and Long Short-Term Memory (LSTM). In this method, the ViT extracts the tactile information that is important for the motion, while the LSTM remembers the orientation and characteristics of the object from the tactile information and links them to the motion. The model uses 16 joint-angle measurements and 912 tactile measurements distributed over the fingers and palm of the Allegro Hand. The model achieved a high success rate of 95% for the dragging motion by adapting to the target object's orientation and features.
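As a rough illustration of the architecture the abstract describes, the sketch below (PyTorch) feeds per-step tactile readings through a small ViT encoder and passes the resulting summary, together with the current joint angles, into an LSTM that predicts the next joint angles. The grouping of the 912 tactile measurements into 57 tokens of 16 values, all layer sizes, and the next-joint-angle prediction target are assumptions for illustration; none of these details are specified in the abstract.

```python
import torch
import torch.nn as nn

class ViTLSTMPolicy(nn.Module):
    """Sketch of a ViT tactile encoder feeding an LSTM motion model.

    Assumed (not given in the abstract): the 912 tactile measurements
    are grouped into 57 tokens of 16 values each, the embedding width
    is 64, and the model predicts the next 16 joint angles per step.
    """

    def __init__(self, n_tactile=912, n_joints=16, token_dim=16,
                 embed_dim=64, n_heads=4, n_layers=2, lstm_hidden=128):
        super().__init__()
        assert n_tactile % token_dim == 0
        self.n_tokens = n_tactile // token_dim
        self.token_embed = nn.Linear(token_dim, embed_dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos = nn.Parameter(torch.zeros(1, self.n_tokens + 1, embed_dim))
        enc_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.vit = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # The LSTM consumes the tactile summary plus current joint angles.
        self.lstm = nn.LSTM(embed_dim + n_joints, lstm_hidden,
                            batch_first=True)
        self.head = nn.Linear(lstm_hidden, n_joints)

    def forward(self, tactile, joints):
        # tactile: (B, T, 912), joints: (B, T, 16)
        B, T, _ = tactile.shape
        tokens = tactile.reshape(B * T, self.n_tokens, -1)
        x = self.token_embed(tokens)
        x = torch.cat([self.cls.expand(B * T, -1, -1), x], dim=1) + self.pos
        feat = self.vit(x)[:, 0]          # CLS token as the per-step summary
        feat = feat.reshape(B, T, -1)
        seq = torch.cat([feat, joints], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out)             # predicted next joint angles

# Example forward pass on dummy data:
# model = ViTLSTMPolicy()
# pred = model(torch.randn(2, 10, 912), torch.randn(2, 10, 16))  # (2, 10, 16)
```

Using the ViT's CLS token as a single per-step feature is one plausible way to let attention select the tactile taxels that matter for the motion, while the recurrent state of the LSTM carries the object's orientation and characteristics across time steps, matching the division of labor the abstract describes.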

© 2023 The Japan Society of Mechanical Engineers