Organizer: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2020
Dates: 2020/05/27 - 2020/05/30
Recently, AI-equipped robots and smart speakers have appeared in homes, opportunities to come into contact with robots in ordinary households and towns have increased, and a wide variety of people are now likely to control robots. For a robot to be controlled smoothly, it must understand human intention. In this paper, we propose a system that controls a robot using ambiguous human instructions, images of the user's feet, and object detection. In this system, a Jetson Nano, a USB microphone, and a USB camera are attached to the mobile robot to be controlled. The robot is controlled using only the body direction estimated from an image of the user's feet, voice instructions, and environment recognition based on object detection, without attaching any special sensors to the user. Ambiguous voice instructions that use demonstratives and therefore do not explicitly specify the target are also recognized.
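The abstract describes combining estimated body direction with object-detection results to resolve ambiguous instructions such as "take that". The paper does not give an algorithm, so the following is only a minimal sketch under assumed conventions: the function name, the angular tolerance, and the representation of detections as (label, bearing) pairs are all hypothetical, not taken from the paper.

```python
def resolve_target(body_direction_deg, detections, max_angle_diff_deg=30.0):
    """Pick the detected object closest to the user's estimated body direction.

    body_direction_deg: facing direction estimated from a feet image (degrees).
    detections: list of (label, bearing_deg) pairs from an object detector.
    Returns the best-matching label, or None if nothing lies within the cone.
    """
    best = None
    best_diff = max_angle_diff_deg
    for label, bearing in detections:
        # Smallest absolute angular difference, wrapped to [0, 180].
        diff = abs((bearing - body_direction_deg + 180.0) % 360.0 - 180.0)
        if diff <= best_diff:
            best, best_diff = label, diff
    return best

# Example: the user faces 90 degrees; a cup at 100 degrees falls inside the
# tolerance cone, while a chair at 200 degrees does not.
print(resolve_target(90.0, [("cup", 100.0), ("chair", 200.0)]))  # -> cup
```

A fixed angular tolerance is a simplification; a real system would likely weight candidates by both angle and the semantics of the spoken instruction.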