The Proceedings of JSME annual Conference on Robotics and Mechatronics (ROBOMECH)
Online ISSN : 2424-3124
Session ID: 1A2-H12

Grasp Stability Prediction of Objects through Visual-Tactile Fusion Learning Based on a Self-Attention Mechanism
*Zhida Qin, Gang Yan, Satoshi Funabashi, Alexander Schmitz, Shigeki Sugano
Abstract

Predicting grasp stability before lifting an object, i.e., whether a grasped object will move with respect to the gripper, leaves more time to correct an unstable grasp than after-lift slip detection does. Recently, deep learning relying on visual and tactile information has become increasingly popular. However, how to combine visual and tactile data effectively remains an open research question. In this paper, we propose to fuse visual and tactile data by introducing self-attention (SA) mechanisms for predicting grasp stability. In our experiments, we use two uSkin tactile sensors and one Spresense camera sensor. A past image of the object, not collected immediately before or during grasping, is used, as such an image might be more readily available. The dataset is collected by grasping and lifting 35 daily objects 1050 times in total, with various forces and grasping positions. As a result, the prediction accuracy improves by over 2.89% compared with previous visual-tactile fusion research.
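The abstract gives no implementation details, but the fusion step it describes, combining visual and tactile features through self-attention, can be sketched as below. This is a minimal illustration written in PyTorch, not the authors' network: the encoders, feature sizes (a 512-d visual feature; 48 values per uSkin reading, assuming 16 taxels x 3 axes), token layout, and classifier head are all assumptions.

import torch
import torch.nn as nn

class VisuoTactileSA(nn.Module):
    # Minimal sketch: fuse one visual token and two tactile tokens
    # with self-attention, then classify grasp stability.
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.vis_proj = nn.Linear(512, d_model)  # assumed visual feature size
        self.tac_proj = nn.Linear(48, d_model)   # assumed 48-d uSkin reading
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, 1)        # stable/unstable logit

    def forward(self, vis_feat, tac_feats):
        # vis_feat: (B, 512); tac_feats: (B, 2, 48) for the two uSkin sensors
        tokens = torch.cat(
            [self.vis_proj(vis_feat).unsqueeze(1),  # (B, 1, d)
             self.tac_proj(tac_feats)],             # (B, 2, d)
            dim=1)                                  # (B, 3, d)
        # Each modality token attends to the others: the fusion step.
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(tokens + fused)           # residual + layer norm
        return self.head(fused.mean(dim=1)).squeeze(-1)  # (B,) logits

model = VisuoTactileSA()
print(model(torch.randn(4, 512), torch.randn(4, 2, 48)).shape)  # torch.Size([4])

Training such a model would plausibly minimize binary cross-entropy (e.g. nn.BCEWithLogitsLoss) against stable/unstable labels from lift trials like the 1050 grasps described above; the loss choice is likewise an assumption, not stated in the abstract.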

© 2023 The Japan Society of Mechanical Engineers