Abstract
This paper introduces a novel technique for the automatic extraction of discontinuities from a digital 3D model of a tunnel face. Discontinuity areas are identified by segmenting projected 2D images of the 3D tunnel face model with a deep learning model, U-Net. The U-Net model integrates multiple input features, namely the projected RGB image, the depth map, and images derived from local surface properties (i.e., normal vectors and curvature), to effectively segment discontinuity areas in the images. The segmentation results are then projected back onto the 3D model using the depth maps and projection matrices, ensuring that the location and extent of the discontinuities are accurately represented in 3D space.
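The back-projection step described above can be illustrated with a minimal sketch. Assuming a pinhole camera model, each segmented pixel (u, v) with depth z is lifted to a 3D point via the inverse intrinsic matrix; the function name `backproject_mask` and the specific matrix values are hypothetical, not from the paper:

```python
import numpy as np

def backproject_mask(mask, depth, K):
    """Back-project segmented pixels to 3D points in the camera frame.

    mask  : (H, W) bool array, True where a discontinuity was segmented
    depth : (H, W) float array of per-pixel depths from the same projection
    K     : (3, 3) camera intrinsic (projection) matrix
    """
    v, u = np.nonzero(mask)                    # pixel rows/cols of segmented area
    z = depth[v, u]                            # depths at those pixels
    pix = np.stack([u, v, np.ones_like(u)], axis=0).astype(float)
    pts = np.linalg.inv(K) @ pix * z           # X = z * K^-1 [u, v, 1]^T
    return pts.T                               # (N, 3) points in the camera frame
```

A further rigid transform (rotation and translation of the virtual camera) would map these camera-frame points into the tunnel model's world coordinates; that step is omitted here for brevity.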