2019, Vol. 55, No. 11, pp. 717-725
Visual servoing positions robots based on images captured by cameras. In conventional approaches, hand-designed image features must be extracted from the images to calculate the command values for the robot, and the positioning accuracy depends strongly on which image features are selected. In this study, we exploit the ability of convolutional neural networks (CNNs) to extract features from images and to output the angular velocities that control a manipulator. We propose a CNN-based visual servoing technique that enables precise positioning of a textureless object grasped by a parallel gripper. Positioning is achieved even when the grasping position differs from the position at which the target image was captured. The positioning accuracy of the proposed method is verified by placing an object into an alignment tray using a six-DOF manipulator. The results confirm that the proposed visual servoing technique can position an object precisely.
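As a rough illustration of the idea described in the abstract, the sketch below shows a CNN that maps a current camera image and a target (goal) image to a six-dimensional joint angular-velocity command. The abstract does not specify the network architecture or input format, so the stacked image-pair input, layer sizes, and all names (e.g. `VisualServoCNN`) are assumptions for illustration only, written here in PyTorch.

```python
# Hypothetical sketch of a CNN-based visual-servoing controller.
# Assumption: the current and target RGB images are stacked channel-wise
# and mapped to a 6-DOF joint angular-velocity command. This is not the
# architecture reported in the paper.
import torch
import torch.nn as nn

class VisualServoCNN(nn.Module):
    def __init__(self, num_joints: int = 6):
        super().__init__()
        # 3 channels (current image) + 3 channels (target image) = 6 input channels.
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_joints),  # joint angular velocities [rad/s]
        )

    def forward(self, current_img: torch.Tensor, target_img: torch.Tensor) -> torch.Tensor:
        x = torch.cat([current_img, target_img], dim=1)
        return self.head(self.features(x))

# Usage sketch: predict one velocity command from placeholder images;
# in a servoing loop this would be sent to the manipulator repeatedly
# until the current image converges to the target image.
model = VisualServoCNN().eval()
current = torch.zeros(1, 3, 128, 128)  # placeholder current camera frame
target = torch.zeros(1, 3, 128, 128)   # placeholder target image
with torch.no_grad():
    joint_velocity = model(current, target)  # shape: (1, 6)
```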