Simultaneous Localization and Mapping (SLAM) is a framework for simultaneously estimating the sensor position and a point-cloud map of an unknown environment. This article describes the theory of SLAM from the viewpoints of its basic techniques, algorithms, and sensors, covering LiDAR SLAM, Depth SLAM, and Visual SLAM.
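To make the idea concrete, the sketch below (a minimal illustration, not any specific method from the article) accumulates 2D LiDAR scans into a global point cloud while tracking the sensor pose from odometry increments; in a full SLAM system these poses would additionally be corrected by scan matching or pose-graph optimization.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """Homogeneous 2D transform for a sensor pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def build_map(scans, odometry):
    """Dead-reckoning mapping: compose relative motions (dx, dy, dtheta)
    into global poses and transform each local scan (N x 2 array) into the
    map frame. A real SLAM system would also correct these poses, e.g. by
    scan matching or pose-graph optimization."""
    pose = np.eye(3)
    global_points = []
    for scan, (dx, dy, dth) in zip(scans, odometry):
        pose = pose @ pose_to_matrix(dx, dy, dth)   # predict new pose
        pts_h = np.c_[scan, np.ones(len(scan))]     # homogeneous points
        global_points.append((pose @ pts_h.T).T[:, :2])
    return np.vstack(global_points)
```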
The Geospatial Information Authority of Japan has created and released the “Public Survey Manual Using LiDAR SLAM Technology”. It is a manual for surveying relatively small areas using a hand-held or wearable laser scanner.
This paper describes our work on an autonomous mobile robot developed through the Tsukuba Challenge. The robot uses mobile 3D measurement technology to generate an environmental map and adds information to the map for position estimation and path planning. Specifically, this report provides an overview of the robot, its hardware configuration, the environmental map generation method, path graph creation, path planning, and localization.
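For illustration only, the sketch below plans a route on a small waypoint graph with Dijkstra's algorithm; the node names and edge costs are hypothetical and do not come from the paper.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a path graph given as {node: [(neighbor, cost), ...]}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical waypoint graph extracted from the environmental map
graph = {"A": [("B", 5.0), ("C", 9.0)],
         "B": [("C", 2.0), ("D", 7.0)],
         "C": [("D", 3.0)],
         "D": []}
print(dijkstra(graph, "A", "D"))   # -> (10.0, ['A', 'B', 'C', 'D'])
```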
Automated vehicle technology is rapidly advancing toward practical application as the next generation of transportation systems. In order to achieve fully automated driving on public roads, a common approach is to build robust perception and decision-making systems on top of a digital map. Self-localization, that is, stably estimating the ego-vehicle position with decimeter-level accuracy even under adverse conditions such as rain and snow, is a challenging task. This paper gives an overview of digital map generation based on SLAM technology and of map-matching-based self-localization for robust urban automated driving.
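As a hedged sketch of the map-matching idea (the paper's actual method is not specified here), the following point-to-point ICP aligns an online LiDAR scan to a prebuilt point-cloud map with NumPy/SciPy; the estimated rigid transform is the pose correction used for self-localization.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(scan, map_pts, iters=30):
    """Align a 2D LiDAR scan (N x 2) to map points (M x 2) by iterating
    nearest-neighbour association and closed-form rigid alignment (SVD).
    Returns rotation R (2x2) and translation t (2,)."""
    tree = cKDTree(map_pts)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        _, idx = tree.query(moved)               # nearest map point per scan point
        target = map_pts[idx]
        mu_s, mu_t = moved.mean(0), target.mean(0)
        H = (moved - mu_s).T @ (target - mu_t)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_t - dR @ mu_s
        R, t = dR @ R, dR @ t + dt               # compose incremental update
    return R, t
```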
Unmanned aerial vehicles (UAVs) equipped with laser scanners have been widely used for various purposes, such as at construction sites and in agriculture, forestry, and disaster management, as they can obtain high-density point clouds with millimeter- to centimeter-scale accuracy. Recently, products ranging from low to high performance that integrate direct georeferencing devices with laser scanners, or that combine laser scanners with Simultaneous Localization and Mapping (SLAM) techniques, have become available. Moreover, there have been numerous reports on the various applications of UAVs equipped with laser scanners; however, most of these reports only discuss results obtained using UAVs as measuring devices. Therefore, to understand the behavior of UAVs equipped with laser scanners, we investigated the accuracy of a survey-grade laser scanner unit from the viewpoint of photogrammetry. We evaluated the performance of point clouds with reflection intensities acquired by an Applanix AP40 GNSS/IMU and a RIEGL VUX-1HA laser scanner at a UAV test site provided by JSPRS. We present the theoretical accuracy derived from the observation equations and the accuracy obtained from the acquired data.
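A generic form of the direct-georeferencing observation equation (the paper's exact formulation may differ) relates a point measured in the scanner frame to ground coordinates via the GNSS/IMU trajectory:

$$ X^{g} \;=\; X^{g}_{\mathrm{GNSS}}(t) \;+\; R^{g}_{b}(t)\,\bigl( R^{b}_{s}\, x^{s} \;+\; a^{b} \bigr) $$

where $X^{g}$ is the ground coordinate of the measured point, $X^{g}_{\mathrm{GNSS}}(t)$ the GNSS/IMU position at time $t$, $R^{g}_{b}(t)$ the body-to-ground rotation from the IMU attitude, $R^{b}_{s}$ the boresight rotation between the scanner and body frames, $x^{s}$ the range/angle observation expressed in the scanner frame, and $a^{b}$ the lever-arm offset; propagating the variance of each term through this equation gives the theoretical accuracy of the point cloud.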
The potential of multitemporal SAR data acquired by Sentinel-1 has been reported for forest mapping and forest management. However, SAR data are difficult to interpret visually because of speckle noise and geometric distortion caused by the range-dependent imaging geometry and the characteristics of radar wavelengths. Recently, Generative Adversarial Networks (GANs) have shown advantages in image-to-image translation and could also provide effective methods for SAR-to-optical image translation. In the current study, CycleGAN, pix2pix, pix2pixHD, and feature-guided SAR-to-optical image translation (FGGAN) were compared using Sentinel-1 and Sentinel-2 data acquired over forests. As a result, FGGAN was the best technique, achieving structural similarity (SSIM) values of 0.664, 0.708, and 0.725 for the red, green, and blue bands, respectively.
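The per-band SSIM figures quoted above can be reproduced for any co-registered band pair with scikit-image; the file names below are placeholders, and both images are assumed to be float arrays scaled to [0, 1] on the same grid.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical arrays: GAN-translated band and reference Sentinel-2 band
translated = np.load("translated_red_band.npy")   # placeholder path
reference  = np.load("sentinel2_red_band.npy")    # placeholder path

# Per-band SSIM, computed separately for red, green, and blue
ssim_value = structural_similarity(translated, reference, data_range=1.0)
print(f"SSIM (red band): {ssim_value:.3f}")
```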