This paper proposes a learning model that enables a robot to acquire a body image for parts of its body that are invisible to itself. The robot estimates the position of its invisible hand using the Jacobian that relates displacements of the joint angles to the optical flow of the hand. When the hand touches one of the invisible tactile sensor units on the face, the robot associates that sensor unit with the estimated hand position. In addition, we propose a model that groups the tactile sensor units by facial part based on discontinuities in their sensor values. Finally, the robot becomes able to associate these groups with the feature points of the corresponding parts in an image of another's face.
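The hand-position estimation above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical linear image Jacobian `J_true`, fits an estimate by least squares from (joint displacement, optical flow) pairs gathered while the hand is visible, and then dead-reckons the hand position from joint motions alone once the hand is out of view.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth image Jacobian (2 image coordinates x 3 joints);
# in the paper this mapping would be learned from the robot's own motion.
J_true = np.array([[1.5, -0.4, 0.2],
                   [0.3,  1.1, -0.7]])

# While the hand is visible: record joint-angle displacements and the
# corresponding optical flow of the hand (noise-free for illustration).
dq = rng.normal(size=(50, 3))   # joint-angle displacements
flow = dq @ J_true.T            # observed hand optical flow in the image

# Estimate the Jacobian by least squares: flow ~= dq @ J_est.T
J_est_T, *_ = np.linalg.lstsq(dq, flow, rcond=None)
J_est = J_est_T.T

# After the hand leaves the field of view: integrate predicted image-plane
# displacements to track the now-invisible hand position.
pos = np.array([10.0, 20.0])            # last visible hand position (pixels)
for step in rng.normal(size=(20, 3)):   # subsequent joint motions
    pos = pos + J_est @ step            # predicted displacement of the hand
```

The estimated position `pos` is what the model would associate with a facial tactile sensor unit at the moment of contact.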