Journal of The Remote Sensing Society of Japan
Online ISSN : 1883-1184
Print ISSN : 0289-7911
ISSN-L : 0289-7911
Volume 43, Issue 2
Displaying 1-7 of 7 articles from this issue
Regular Paper
  • Haruka Izumisawa, Sota Hirayama, Yosei Mizukami, Kenlo Nishida Nasahar ...
    2023 Volume 43 Issue 2 Pages 73-85
    Published: 2023
    Released on J-STAGE: June 22, 2023
    Advance online publication: May 23, 2023
    JOURNAL FREE ACCESS

     Okinawa Island faces a number of environmental problems, such as red soil runoff and biodiversity degradation. Land use/land cover (LULC) and its changes are major factors contributing to these problems. However, few up-to-date LULC maps of Okinawa Island offer sufficient accuracy or sufficiently detailed classification categories. In this study, we developed an integrated localization method in which new local LULC maps with a localized classification category system were created by taking advantage of existing LULC maps (polygon-based maps and probabilistic layers from AI-based maps) and local expert knowledge. Using this method, we created LULC maps with 13 categories, including major categories found on Okinawa Island such as “Agricultural greenhouse,” “Sugarcane,” “Pineapple,” and “Mangrove forest.” We used Sentinel-1 and -2 satellite images, Google’s Dynamic World (DW) probability maps, JAXA’s HRLULC version 21.11 (JAXA-v21.11) probability maps, and MAFF’s Fude Polygon maps as input data for classification. By combining all these input data into a single feature space and applying Random Forests classification with training data we collected ourselves, we obtained an updated LULC map (2020 as the reference year, 10-m spatial resolution) with a higher overall accuracy (OA; 88.45±1.10 %) than other existing maps. To assess the contribution of each set of input data, we tested different choices and combinations of the input data. OA was lowest (83.18±1.29 %) when only Sentinel satellite images were used (LC_S); however, incorporating DW probability maps (LC_S&D) or JAXA-v21.11 probability maps (LC_S&J) increased the OA to 85.56±1.21 % and 84.20±1.26 %, respectively. Incorporating Fude Polygon maps (LC_S&F) increased the OA to 86.28±1.19 %. The product developed in this study has been released on JAXA’s “High-resolution Land Use Land Cover Maps” website.
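
     The classification workflow described in this abstract (stacking the Sentinel bands, the DW and JAXA-v21.11 probability layers, and rasterized Fude Polygons into a single per-pixel feature space, then applying Random Forests) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors’ code; the file names, array shapes, and hyperparameters are assumptions.

     # Minimal sketch (assumptions, not the authors' code): stack Sentinel bands and
     # auxiliary probability layers into one feature space, then classify with
     # Random Forests as the abstract describes.
     import numpy as np
     from sklearn.ensemble import RandomForestClassifier

     # Hypothetical inputs: each array is (height, width, n_layers) for one tile.
     sentinel_bands = np.load("sentinel_s1_s2_stack.npy")   # Sentinel-1 and -2 bands
     dw_probs = np.load("dynamic_world_probs.npy")          # DW class probabilities
     jaxa_probs = np.load("jaxa_v2111_probs.npy")           # JAXA-v21.11 probabilities
     fude_layer = np.load("fude_polygon_raster.npy")        # rasterized Fude Polygons

     # Combine all inputs into a single feature space: one row of features per pixel.
     features = np.concatenate([sentinel_bands, dw_probs, jaxa_probs, fude_layer], axis=-1)
     X = features.reshape(-1, features.shape[-1])

     # Training samples: flat pixel indices and 13-category labels collected by the analyst.
     train_idx = np.load("train_pixel_indices.npy")
     train_labels = np.load("train_labels.npy")

     clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
     clf.fit(X[train_idx], train_labels)

     # Predict a LULC category for every pixel and restore the 2-D map shape.
     lulc_map = clf.predict(X).reshape(features.shape[:2])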

Engineering Report
  • Jihyun Lim, Mitsunori Ishihara, Hiroshi Tsunematsu, Ryo Sugiura
    2023 Volume 43 Issue 2 Pages 86-96
    Published: 2023
    Released on J-STAGE: June 22, 2023
    Advance online publication: May 31, 2023
    JOURNAL FREE ACCESS

     Multispectral cameras for drone sensing have multiple image sensors, each with a different viewing angle and focal point. Aligning the images of all bands (registration) is therefore an essential step before the images of two or more bands can be used for an analysis such as calculation of the normalized difference vegetation index. We propose a feature-based registration technique that uses the OpenCV open-source computer vision library to align multispectral images simply and inexpensively. The method also corrects for lens effects, including distortion, converts pixel values from digital numbers to radiance and reflectance, and exports the results as geotagged images for further processing. In the registration process, one band is used as the reference image and the others are treated as sensed images. The sensed images are then aligned to the reference image by 3×3 homography matrices, which are estimated with OpenCV from corresponding keypoints between the reference image and each sensed image. We evaluated the method with multiple sets of images taken by three major drone-mountable multispectral cameras (MicaSense RedEdge-3, Parrot Sequoia+, and DJI P4 Multispectral), with five feature-detector-descriptor algorithms (AKAZE, SIFT, SURF, BRISK, and ORB), and with every band as the reference. We found that the green band was the best choice of reference. Among the feature-detector-descriptor algorithms, AKAZE and SIFT had similar success rates, exceeding 89 % for all image sets; however, the processing time was shorter for SIFT than for AKAZE, especially for large images. We also describe reasonable methods for deriving reflectance from the images of each camera.
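
     The registration pipeline described in this abstract (keypoint detection, descriptor matching, 3×3 homography estimation, and warping of each sensed band onto the reference band) maps directly onto standard OpenCV calls. The sketch below is a minimal illustration using SIFT and RANSAC, not the authors’ implementation; the band file names are placeholders.

     # Minimal sketch (assumed, not the authors' implementation): feature-based
     # band-to-band registration with OpenCV, using SIFT keypoints and a 3x3
     # homography estimated with RANSAC.
     import cv2
     import numpy as np

     reference = cv2.imread("green_band.tif", cv2.IMREAD_GRAYSCALE)   # reference band
     sensed = cv2.imread("nir_band.tif", cv2.IMREAD_GRAYSCALE)        # band to align

     # Detect keypoints and compute descriptors in both images.
     sift = cv2.SIFT_create()
     kp_ref, des_ref = sift.detectAndCompute(reference, None)
     kp_sen, des_sen = sift.detectAndCompute(sensed, None)

     # Match descriptors and keep only distinctive matches (Lowe's ratio test).
     matcher = cv2.BFMatcher(cv2.NORM_L2)
     matches = matcher.knnMatch(des_sen, des_ref, k=2)
     good = [m for m, n in matches if m.distance < 0.75 * n.distance]

     # Estimate the 3x3 homography from the corresponding keypoints.
     src_pts = np.float32([kp_sen[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
     dst_pts = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
     H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

     # Warp the sensed band into the reference band's pixel grid.
     aligned = cv2.warpPerspective(sensed, H, (reference.shape[1], reference.shape[0]))
     cv2.imwrite("nir_band_aligned.tif", aligned)

     Swapping in another detector (e.g. cv2.AKAZE_create()) also requires a matching norm suited to its descriptors, such as Hamming distance for binary descriptors.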

Explanation
Book Reviews
Lecture Series
Laboratory Introduction