Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN: 2424-3124
Session ID: 2A2-D12

Investigation of Human Environmental Recognition for Mobile Robot Navigation
*小野 貴史, 髙倉 洸太, 高橋 智一, 鈴木 昌人, 新井 泰彦, 青柳 誠司
Abstract

Autonomous mobile robots are generally equipped with various sensors, such as web cameras and laser range finders (LRFs), and it is difficult to extract the appropriate data from the resulting large sensor data stream. In this study, the human ability and method of environmental recognition are experimentally investigated: a human subject is shown image data, 2D point cloud data, or 3D point cloud data on a computer display and drives a robot by remote control, with the connection established over a virtual private network. During the experiment, the subject's gaze point on the display is tracked by an eye mark recorder. Afterwards, a questionnaire survey is carried out on what the subject paid attention to. The results show that the subject could control the robot better using image data than using point cloud data. This is because a human can understand the environment directly from image information, whereas the 3D semantic environment must be mentally reconstructed from point cloud information.
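The 2D point cloud data mentioned above is typically obtained by converting an LRF's polar range scan into Cartesian coordinates before display. As a minimal sketch (not the authors' actual pipeline; function and parameter names are illustrative), the conversion can be written as:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert an LRF range scan (polar) to a 2D point cloud (Cartesian).

    ranges          -- list of measured distances [m], one per beam
    angle_min       -- angle of the first beam [rad]
    angle_increment -- angular spacing between consecutive beams [rad]
    Returns a list of (x, y) points in the sensor frame.
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

A scan of [1.0, 2.0] m starting at 0 rad with a 90-degree increment, for example, yields one point straight ahead at (1, 0) and one to the side near (0, 2); these points would then be plotted on the operator's display.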

© 2017 The Japan Society of Mechanical Engineers