Abstract
This paper describes an interface for multimodal human-robot interaction that enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot builds an environment map from the knowledge acquired through communication with the human and uses this map for navigation.