Abstract
This paper presents semantic grasping of everyday objects using a knowledge model. The knowledge model describes the structure of everyday objects and information on their manipulation. When the intended purpose is given, the grasp areas and approach directions are extracted from the knowledge model and mapped onto the 3D point cloud data of the target. Fine grasps suitable for the intended task are then generated by performing grasp planning.