Abstract
In robotic manipulation, visual servoing is important for achieving dexterous and accurate handling, and one of its most significant problems is occlusion by the robot's own body. When a manipulator handles an object, the object's position and orientation are difficult to determine because the object is hidden by the robot itself. If the position and orientation are estimated while accounting for this occlusion, the robot can manipulate the object more dexterously by visual servoing. In this paper, we propose an algorithm that estimates the position and orientation of the manipulated object during manipulation using a 3D sensor. The 3D sensor observes the entire environment, and the observed 3D point cloud is classified into the manipulator, the object, and the background based on their 3D models. The proposed algorithm is verified on an actual robot manipulation system.
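The abstract does not give implementation details, but the model-based classification step can be illustrated with a minimal sketch: each observed point is labeled by its distance to point sets sampled from the manipulator and object models. The function name, labels, and distance threshold below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): classify an observed
# point cloud into manipulator, object, or other by comparing each point
# against point sets sampled from the known 3D models.
import numpy as np
from scipy.spatial import cKDTree


def classify_point_cloud(observed, manipulator_model, object_model, dist_thresh=0.01):
    """Label each observed 3D point by its nearest model within dist_thresh (meters).

    observed:          (N, 3) points from the 3D sensor
    manipulator_model: (M, 3) points sampled from the robot model posed by forward kinematics
    object_model:      (K, 3) points sampled from the object model at its current pose estimate
    Returns an (N,) array of labels: 0 = other, 1 = manipulator, 2 = object.
    """
    d_manip, _ = cKDTree(manipulator_model).query(observed)
    d_obj, _ = cKDTree(object_model).query(observed)

    labels = np.zeros(len(observed), dtype=int)            # default: other / background
    labels[(d_manip <= dist_thresh) & (d_manip <= d_obj)] = 1
    labels[(d_obj <= dist_thresh) & (d_obj < d_manip)] = 2
    return labels


if __name__ == "__main__":
    # Toy usage with random stand-in data; in practice the model point sets
    # would come from the robot's kinematic model and the object's 3D model.
    rng = np.random.default_rng(0)
    observed = rng.uniform(-0.5, 0.5, size=(1000, 3))
    manipulator_model = rng.uniform(-0.5, 0.0, size=(500, 3))
    object_model = rng.uniform(0.0, 0.5, size=(500, 3))
    print(np.bincount(classify_point_cloud(observed, manipulator_model, object_model)))
```

The points labeled as belonging to the object can then be used for pose estimation, while the points labeled as manipulator or background are excluded, which is how the occlusion by the robot's body is taken into account.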