2017 Volume 10 Issue 5 Pages 476-485
Stereo vision is a well-known technique for vision-based 3D reconstruction of environments. Recently developed spherical cameras extend the concept to the full 360° field of view and can provide LiDAR-like 360° 3D data with color information. Accurate stereo disparity estimation requires knowledge of the relative pose between the two cameras, represented by the five-degree-of-freedom epipolar geometry. However, mechanically aligning and/or calibrating such systems is tedious. We propose a technique that recovers all five degrees of freedom of the epipolar geometry in a single minimization, using a dense approach that involves the individual pixel displacements (optical flow) between the two camera views. Taking advantage of the spherical image geometry, a non-linear least-squares optimization based on the dense optical flow directly minimizes the angles between pixel displacements and epipolar curves in order to align them. This approach is particularly suitable for dense 3D reconstruction, as the pixel-to-pixel disparity between the two images can be computed accurately and converted to a dense point cloud. Furthermore, no assumptions are made about the direction of camera displacement. We demonstrate the method with error evaluations, examples of successfully rectified spherical stereo pairs, and the dense 3D models generated from them.
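The five-degree-of-freedom recovery described above can be illustrated with a simplified sketch, not the paper's actual implementation: the relative pose is parameterized as an axis-angle rotation (3 parameters) plus a unit translation direction (2 spherical angles), the essential matrix is formed as E = [t]ₓR, and a non-linear least-squares solver minimizes an angular epipolar error on the unit sphere. Here the residual is the angular deviation of each bearing vector in the second view from the epipolar great circle induced by the corresponding bearing in the first view, which is a common simplification of the angle-to-epipolar-curve criterion; all function names and the synthetic correspondences are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def essential(params):
    """E = [t]_x R from 5 params: 3 axis-angle + 2 angles for the unit translation."""
    R = rodrigues(params[:3])
    az, el = params[3], params[4]
    t = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    T = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
    return T @ R

def residuals(params, p1, p2):
    """Angular deviation of each bearing p2 from the epipolar great circle of p1."""
    E = essential(params)
    n = (E @ p1.T).T                      # great-circle normals, one per pixel
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return np.arcsin(np.clip(np.sum(p2 * n, axis=1), -1.0, 1.0))

# Synthetic spherical correspondences under a known ground-truth pose
# (stand-in for dense optical flow between two spherical views).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3)) + np.array([0.0, 0.0, 4.0])  # 3D points
R_gt = rodrigues(np.array([0.02, -0.05, 0.03]))
t_gt = np.array([1.0, 0.2, 0.1])
t_gt /= np.linalg.norm(t_gt)
X2 = (R_gt @ X.T).T + t_gt                # points in the second camera frame
p1 = X / np.linalg.norm(X, axis=1, keepdims=True)
p2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)

# Single minimization over all five parameters from a rough initial guess.
sol = least_squares(residuals, np.zeros(5), args=(p1, p2))
az, el = sol.x[3], sol.x[4]
t_est = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
```

With noise-free synthetic data the solver drives all angular residuals to zero and recovers the translation direction up to sign (the usual scale/sign ambiguity of the essential matrix); on real data the residuals would instead reflect optical-flow noise.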