Journal of Robotics, Networking and Artificial Life
Online ISSN : 2352-6386
Print ISSN : 2405-9021
Predicting the Weight of Grappling Noodle-like Objects using Vision Transformer and Autoencoder
Nattapat Koomklang, Prem Gamolped, Eiji Hayashi, Abbe Mowshowitz
Journal, Open Access

2023, Volume 10, Issue 1, pp. 33-38

Abstract
This paper presents a novel approach to accurate weight estimation in the robotic manipulation of noodle-like objects. The approach combines vision-transformer and autoencoder techniques with action data and RGB-D encoding to improve a robot's ability to manipulate objects of varying weight. A deep neural network is introduced to estimate the grasping action of a robot picking up noodle-like objects, using RGB-D camera input, a six-finger gripper, and Cartesian movement; the hardware setup and the characteristics of the noodle-like objects are described. The study builds on previous work in RGB-D perception, weight estimation, and deep learning, and addresses the limitations of existing methods by incorporating robot actions. The effectiveness of vision transformers, autoencoders, self-supervised deep reinforcement learning, and deep residual learning in robotic manipulation is discussed. The proposed approach leverages the Transformer network to encode sequential and spatial information for weight estimation. Experimental evaluation on a dataset of 20,000 samples collected in real environments demonstrates the effectiveness and accuracy of the approach in grappling noodle-like objects. This research contributes to advances in robotic manipulation, enabling robots to handle objects of varying weight in real-world scenarios.
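
The abstract outlines the core idea of the architecture: a vision-transformer encoder over RGB-D input, fused with the robot's grasp action, regressing the weight of the grasped noodles. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the patch size, embedding width, 16-dimensional action vector, and regression head are all assumptions, and the autoencoder stage of the proposed pipeline is omitted for brevity.

```python
# Minimal sketch (assumed architecture, not the paper's code): a ViT-style
# encoder over RGB-D patches fused with an encoded grasp-action vector,
# regressing the weight of the grasped noodle-like objects.
import torch
import torch.nn as nn


class RGBDActionWeightEstimator(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_ch=4,   # RGB-D = 4 channels
                 dim=256, depth=4, heads=8, action_dim=16):    # sizes are assumptions
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding via non-overlapping convolution, as in a standard ViT.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Action branch: embed the grasp / Cartesian-motion parameters.
        self.action_encoder = nn.Sequential(
            nn.Linear(action_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Regression head over the fused [CLS] token and action embedding.
        self.head = nn.Sequential(nn.Linear(dim * 2, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, rgbd, action):
        # rgbd: (B, 4, H, W) RGB-D image; action: (B, action_dim) grasp parameters.
        x = self.patch_embed(rgbd).flatten(2).transpose(1, 2)       # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        fused = torch.cat([x[:, 0], self.action_encoder(action)], dim=-1)
        return self.head(fused).squeeze(-1)                          # predicted weight


if __name__ == "__main__":
    model = RGBDActionWeightEstimator()
    weight = model(torch.randn(2, 4, 224, 224), torch.randn(2, 16))
    print(weight.shape)  # torch.Size([2])
```

In the paper the Transformer also encodes sequential action information; here the action branch is reduced to a simple MLP to keep the sketch short.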
© 2023 ALife Robotics Corporation Ltd.

This article is provided under a Creative Commons [Attribution-NonCommercial 4.0 International] license.
https://creativecommons.org/licenses/by-nc/4.0/deed.ja