Autonomous navigation is an essential technique for exploring distant small bodies: it updates the spacecraft state and controls the spacecraft position relative to the asteroid. This paper describes a point cloud-based navigation method that estimates the spacecraft's self-location by matching asteroid point cloud data to asteroid images with the Hough transform. The point cloud data, a set of vertices of a shape model, constitutes a sparse shape model. Although conventional image-based navigation requires a dense shape model or high-resolution images, in the Hayabusa2 mission accurate navigation was achieved during descent and landing by manually matching the asteroid image against only a few thousand points. This paper therefore proposes a matching method for sparse point clouds and images that reproduces this manual point-to-image matching with the Hough transform. Simulations were performed with point clouds ranging from 1,000 to 10,000 points under various sunlight conditions. The proposed method achieves a matching accuracy below 1 [px] with 4,050 points, and below 2,000 points its estimation accuracy is superior to that of the conventional method.
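The core idea of Hough-transform matching can be illustrated with a minimal sketch: each pairing of a model point with a detected image feature casts a vote for a candidate alignment, and the alignment with the most votes wins. The sketch below restricts the parameter space to a 2D image-plane translation; the function name, inputs, and bin size are illustrative assumptions, and the paper's actual method (projection of a 3D shape-model point cloud under varying sunlight) is considerably more involved.

```python
import numpy as np

def hough_match_translation(model_pts, image_pts, accumulator_shape, bin_size=1.0):
    """Vote for the 2D translation that best aligns sparse model points with
    detected image feature points. This is a simplified, translation-only
    Hough-style matcher, not the paper's full algorithm."""
    acc = np.zeros(accumulator_shape, dtype=np.int32)
    for mx, my in model_pts:
        for ix, iy in image_pts:
            # Each (model point, image point) pair votes for one offset bin.
            tx = int(round((ix - mx) / bin_size))
            ty = int(round((iy - my) / bin_size))
            if 0 <= ty < accumulator_shape[0] and 0 <= tx < accumulator_shape[1]:
                acc[ty, tx] += 1
    # The accumulator peak is the translation consistent with the most pairs.
    ty, tx = np.unravel_index(np.argmax(acc), acc.shape)
    return tx * bin_size, ty * bin_size

# Usage: image features are the model points shifted by (10, 4).
model = [(0, 0), (5, 3), (2, 7)]
image = [(10, 4), (15, 7), (12, 11)]
print(hough_match_translation(model, image, (20, 20)))  # → (10.0, 4.0)
```

Because every correct pair votes for the same bin while spurious pairs scatter their votes, the peak remains detectable even when the point cloud is sparse, which is the property the abstract exploits.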