Abstract
This paper describes a novel approach to recognizing human behavior precisely by using three-dimensional information about objects in the surroundings. To estimate the label, position, and orientation of each surrounding object, we use RGB-D point cloud data and a three-dimensional feature matching algorithm. Using these object labels and positions, we propose a relationship descriptor between human motion and manipulated objects, together with a Motion-Object Language model. This descriptor and model make it possible to recognize human behavior precisely. If human behavior can be recognized in detail, robots could support us proactively and manipulate objects without explicit instructions.