Image processing is one of the methods used to measure position/attitude for robot control, and it is expected to be applicable to space robot missions, including REX-J (Robot EXperiment on ISS/JEM). The REX-J mission, conducted by JAXA, involves experiments on a space robot's tether-based locomotion. Measuring the robot's motion accurately is crucial to establishing this new tether-based locomotive technology. With conventional methods, a suitable illumination environment is configured for high-precision image processing and a characteristic marker is attached to the measurement object. However, image processing during the REX-J mission poses two challenges: (1) the illumination in space changes significantly with orbital motion, and (2) the robot lacks a characteristic marker. Accordingly, our purpose is to develop a markerless image processing method suited to the illumination environment of space and to measure the robot's position/attitude in the REX-J mission by image processing. The proposed image processing method creates virtual points at the intersections of the robot's edges in the image, which are then used as markers. This method is robust to changes in the illumination environment because a virtual point can be created even if an edge is incomplete. The method was applied to the REX-J mission, and the measurement accuracy of the robot's position/attitude in the illumination environment of space was confirmed to be at the sub-pixel level. Subsequently, the position/attitude of the robot during tether-based movement was measured by image processing. In addition, the error in the robot's position/attitude as estimated from the tether lengths was clarified using the image processing results. Based on these results, the robot's locomotive function in the REX-J mission was verified.
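The core idea of the virtual-point approach can be sketched as follows: two detected edge segments are each represented as an infinite line, and the intersection of those lines serves as the virtual marker point. Because the lines are extended beyond the detected segments, an intersection is obtained even when an edge is partially missing in the image. This is a minimal illustrative sketch, not the mission's actual implementation; the function names and the homogeneous-coordinate formulation are assumptions for illustration.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line l = p1 x p2 through two image points (hypothetical helper)."""
    a = np.array([p1[0], p1[1], 1.0])
    b = np.array([p2[0], p2[1], 1.0])
    return np.cross(a, b)

def virtual_point(edge1, edge2):
    """Intersect the infinite lines supporting two edge segments.

    Each edge is given as a pair of endpoints. The intersection exists even
    if the physical edges are incomplete, since the fitted lines are extended.
    Returns (x, y) in pixels, or None for (near-)parallel edges.
    """
    l1 = line_through(*edge1)
    l2 = line_through(*edge2)
    p = np.cross(l1, l2)          # intersection in homogeneous coordinates
    if abs(p[2]) < 1e-12:
        return None               # lines are parallel: no finite intersection
    return p[:2] / p[2]           # dehomogenize to pixel coordinates

# Example: two short diagonal segments whose supporting lines cross at (0.5, 0.5)
print(virtual_point(((0.0, 0.0), (1.0, 1.0)), ((0.0, 1.0), (1.0, 0.0))))
```

In practice the edge lines would be fitted to many edge pixels (e.g. by least squares or a Hough transform) rather than defined by two endpoints, which makes the resulting virtual point stable at the sub-pixel level.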