In studies on vision for intelligent robots, one of the most important problems is the extraction of three-dimensional information, such as normals to the object surface and distances from the camera, from two-dimensional images. Numerous methods have been presented so far. The authors have previously proposed a method using cone-shaped beams of light, which can measure the normal vectors and the three-dimensional coordinates of points on the object surface. However, some problems remain in this method, such as the uniqueness of the solution, measurement errors, and the resolution of the system.
To solve those problems, the present study presents a new method that uses both projected patterns and the advance of a camera. In this method, the changes in the center positions of the patterns caused by the camera advance are used to determine depths from the camera. The determined depths and the apparent distortions of the projected patterns are then used to calculate the local surface normals. Finally, the solution is improved by using global information about the objects and information on pattern sizes in the central region of the observed scene. Results of experiments with both simulated and real images are also reported. The present method is superior to the previous methods in reliability, accuracy, and resolution.
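The depth-from-advance step can be illustrated with a minimal sketch. The abstract does not give the paper's actual equations, so the following assumes a simple pinhole camera advancing along its optical axis; the function name and variables are illustrative, not the authors' notation.

```python
# Sketch of depth recovery from axial camera advance under a pinhole model.
# Assumption (not from the paper): a pattern center imaged at radial
# distance r_before from the image center moves outward to r_after when
# the camera advances by `advance` along its optical axis, following
#     r_after / r_before = Z / (Z - advance)
# where Z is the depth of the point before the advance.

def depth_from_advance(r_before, r_after, advance):
    """Return the depth Z of a pattern center before the camera advance."""
    if r_after <= r_before:
        raise ValueError("image radius must grow as the camera advances")
    return advance * r_after / (r_after - r_before)

# Example: a point 0.1 m off-axis at depth 2.0 m, unit focal length,
# camera advanced by 0.5 m.
r1 = 1.0 * 0.1 / 2.0            # image radius before the advance
r2 = 1.0 * 0.1 / (2.0 - 0.5)    # image radius after the advance
print(depth_from_advance(r1, r2, 0.5))  # close to the true depth of 2.0 m
```

Under this model, depth follows from the ratio of the two image radii alone, which is why shifts of the pattern centers suffice for the first stage before the distortions are used for the normals.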