The environments in which robots operate are shifting from orderly settings such as factories to diverse ones such as human living spaces. To move autonomously, a robot requires an environmental map composed of geometric features. However, it is difficult to prepare such maps in advance for diverse environments such as human living spaces; robots therefore need a method for constructing environmental maps autonomously. Recently, low-cost, user-friendly RGB-D sensors such as the Kinect have attracted attention. However, because an RGB-D sensor is designed primarily to capture human movement, it is not suited to highly precise measurement: the accuracy of its depth information is limited, which restricts its ability to measure detailed shapes accurately. In this paper, we propose a sensor calibration method for acquiring highly precise range-imaging data. In the proposed method, linear correction functions are fitted by the method of least squares for each sensor and each pixel. Furthermore, we demonstrate that the point cloud data obtained from an RGB-D sensor can be corrected with high precision by using two types of correction formulas, one for short-range and one for long-range measurements, selected according to the distance from the sensor.
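The per-pixel linear correction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array shapes, the function names, and the regime-splitting threshold (here 1.5 m) are all assumptions introduced for the example; the source does not specify how the short/long-range boundary is chosen.

```python
import numpy as np

def fit_pixel_corrections(measured, truth, threshold=1.5):
    """Fit per-pixel linear corrections d_true ~= a*d_meas + b by least
    squares, separately for short-range and long-range samples.

    measured, truth: (n_samples, H, W) depth arrays in meters.
    threshold: hypothetical distance (m) separating the two regimes.
    Returns two (H, W, 2) arrays of [a, b] coefficients.
    """
    n, H, W = measured.shape
    short_coef = np.zeros((H, W, 2))
    long_coef = np.zeros((H, W, 2))
    for i in range(H):
        for j in range(W):
            d = measured[:, i, j]
            t = truth[:, i, j]
            for mask, out in ((d < threshold, short_coef),
                              (d >= threshold, long_coef)):
                if mask.sum() >= 2:
                    # degree-1 least-squares fit: t = a*d + b
                    a, b = np.polyfit(d[mask], t[mask], 1)
                    out[i, j] = (a, b)
                else:
                    out[i, j] = (1.0, 0.0)  # identity fallback

    return short_coef, long_coef

def correct_depth(frame, short_coef, long_coef, threshold=1.5):
    """Apply the regime-appropriate per-pixel linear correction."""
    short = short_coef[..., 0] * frame + short_coef[..., 1]
    long_ = long_coef[..., 0] * frame + long_coef[..., 1]
    return np.where(frame < threshold, short, long_)
```

Calibration data would pair raw sensor frames (`measured`) with reference distances (`truth`, e.g. from a measured target at known positions); at runtime, each pixel of a new frame is corrected with whichever of the two fitted lines matches its measured distance.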