Organizer: The Japan Society of Mechanical Engineers (JSME)
Conference: Robotics and Mechatronics Conference 2024 (ROBOMECH 2024)
Dates: 2024/05/29 - 2024/06/01
Against the backdrop of a global labor shortage, the installation of robot handling systems has been steadily increasing. However, these systems are constrained by the limitations of three-dimensional measurement, in particular the difficulty of accurately measuring specular and transparent objects, which restricts the range of objects that can be handled. In this study, we focus on the ability of a hand-eye robot system to capture images of a scene from various poses while transporting an object. We propose a method that enables the three-dimensional measurement, grasping, and transportation of specular and transparent objects without significantly increasing the execution time of handling tasks. The proposed method performs semantic segmentation on the images captured from various poses, and then applies shape from silhouette to the segmented silhouettes to obtain a three-dimensional measurement.
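The core measurement step described above, shape from silhouette, can be sketched as voxel carving: every voxel of a working volume is projected into each view, and voxels that fall outside any segmented silhouette are discarded. The sketch below is a minimal illustration under assumed inputs, not the authors' implementation: `masks` stands in for the binary outputs of the semantic segmentation, `proj_mats` for known 3x4 camera projection matrices from the hand-eye calibration, and the grid bounds and resolution are illustrative parameters.

```python
import numpy as np

def shape_from_silhouette(masks, proj_mats, grid_min, grid_max, res=32):
    """Voxel carving: keep voxels whose projection lies inside every silhouette.

    masks       : list of HxW binary arrays (1 = object) from segmentation
    proj_mats   : list of 3x4 camera projection matrices (intrinsics @ extrinsics)
    grid_min/max: 3-vectors bounding the working volume (world coordinates)
    """
    # Build a regular voxel grid over the working volume (homogeneous coords).
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # N x 4

    occupied = np.ones(len(pts), dtype=bool)
    for mask, P in zip(masks, proj_mats):
        uvw = pts @ P.T                   # project voxel centers into the image
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        # Carve away voxels that project outside this view's silhouette.
        occupied &= inside & (mask[vi, ui] > 0)
    return pts[occupied, :3], occupied.reshape(res, res, res)
```

Because carving is purely an intersection of back-projected silhouettes, it needs no photometric consistency, which is what makes the approach viable for specular and transparent surfaces where depth sensors and stereo matching fail; the reconstruction quality then depends on the number and diversity of the poses visited during transport.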