IPSJ Transactions on Computer Vision and Applications
Online ISSN : 1882-6695
ISSN-L : 1882-6695
Integrating Motion and Segmentation for Road Scene Labeling
Yousun KangKoichiro YamaguchiTakashi NaitoYoshiki Ninomiya
Author information

2010 Volume 2 Pages 121-131


Structure from motion (SfM) and appearance-based segmentation both play an important role in the interpretation of road scenes. Integrating the two can improve interpretation performance, since the relation between 3D spatial structure and 2D semantic segmentation can be taken into account. This paper presents a new integration framework that combines an SfM module with a bag-of-textons method for road scene labeling. By using a multiband image, which combines a near-infrared image with a visible color image, we can generate more discriminative textons than with a color image alone. Our SfM module accurately estimates the ego-motion of the vehicle and reconstructs the 3D structure of the road scene. The bag of textons is computed over local rectangular regions whose size depends on the 3D distance of the textons. This 3D bag-of-textons method therefore recognizes road-scene objects more effectively, because it takes each object's 3D structure into account. To solve the labeling problem, we employ a pairwise conditional random field (CRF) model: the unary potential incorporates the SfM results, and the pairwise potential is modulated by the multiband image intensity. Experimental results show that the proposed method effectively classifies the objects in a 2D road scene using 3D structure. The proposed system can advance 3D scene understanding for vehicle environment perception.
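The depth-dependent windowing described in the abstract can be illustrated with a minimal sketch: a texton histogram is pooled over a rectangular window whose side length shrinks as the region's 3D distance grows, so that distant (smaller-appearing) objects are described at a matching scale. All parameter values and function names here are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def texton_window_size(depth_m, base_size=32, ref_depth=10.0, min_size=8):
    """Side length of the pooling window; shrinks with 3D distance.

    base_size, ref_depth and min_size are hypothetical defaults chosen
    for illustration only.
    """
    size = int(base_size * ref_depth / max(depth_m, 1e-6))
    return max(min_size, min(size, base_size * 4))

def bag_of_textons(texton_map, cx, cy, depth_m, n_textons=64):
    """Normalized texton histogram over a depth-scaled window.

    texton_map: 2D int array of per-pixel texton indices in [0, n_textons).
    (cx, cy):   window center in pixel coordinates.
    depth_m:    estimated 3D distance of the region (e.g. from SfM).
    """
    half = texton_window_size(depth_m) // 2
    h, w = texton_map.shape
    y0, y1 = max(0, cy - half), min(h, cy + half + 1)
    x0, x1 = max(0, cx - half), min(w, cx + half + 1)
    patch = texton_map[y0:y1, x0:x1]
    hist = np.bincount(patch.ravel(), minlength=n_textons).astype(float)
    return hist / hist.sum()
```

In this sketch, a region estimated at 50 m gets a much smaller pooling window than one at 5 m, mimicking the abstract's idea that the histogram support should follow the object's apparent size.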

© 2010 by the Information Processing Society of Japan