Organizer: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2017 (ROBOMECH 2017)
Dates: 2017/05/10 - 2017/05/13
Autonomous mobile robots are generally equipped with various sensors, such as web cameras and laser range finders (LRFs), and it is difficult to extract the appropriate data from the resulting huge sensor data stream. In this study, the human ability and method of environmental recognition are experimentally investigated: a human subject is shown image, 2D point cloud, or 3D point cloud data on a computer display and drives a robot by remote control over a virtual private network. During the experiment, the subject's gaze point on the display is tracked by an eye mark recorder; afterwards, a questionnaire survey is carried out on what the subject paid attention to. The results show that the subject could control the robot better when using image data than when using point cloud data. This is because a human can understand the environment directly from image information, whereas the 3D semantic environment must be mentally reconstructed from point cloud information.
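To illustrate the kind of data the point-cloud conditions present, a planar LRF scan of ranges and bearing angles can be converted into a 2D point cloud in the sensor frame. The following is a minimal sketch, not the authors' implementation; the scan values and the function name are hypothetical.

```python
import math

def lrf_scan_to_points(ranges, angle_min, angle_increment):
    """Convert an LRF range scan to 2D Cartesian points.

    ranges: measured distances [m], one per beam
    angle_min: bearing of the first beam [rad]
    angle_increment: angular step between consecutive beams [rad]
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        # Polar-to-Cartesian conversion in the sensor frame
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Hypothetical 3-beam scan: beams at -90, 0, and +90 degrees
pts = lrf_scan_to_points([1.0, 2.0, 1.0], -math.pi / 2, math.pi / 2)
print(pts)
```

Displaying such points directly, rather than the camera image, is what forces the operator to rebuild the scene structure mentally.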