Journal of the Japan Society of Photogrammetry and Remote Sensing
Online ISSN : 1883-9061
Print ISSN : 0285-5844
ISSN-L : 0285-5844
Volume 60, Issue 6
Original Papers
  • Junko SUGIMORI, Ryouhei YAMAMOTO, Hirofumi CHIKATSU
    2021 Volume 60 Issue 6 Pages 336-343
    Published: 2021
    Released on J-STAGE: January 01, 2023
    JOURNAL FREE ACCESS

    Various kinds of cameras have been utilized as onboard cameras in the construction of Intelligent Transport Systems. Recently, the use of high-sensitivity consumer-grade digital cameras at night has been attracting attention as a way to avoid the effects of sunlight and the congestion of people and cars. However, because the image taken by an onboard camera is a perspective projection, objects far from the car are projected small, and the influence of lens distortion grows with distance from the image center. To avoid these issues, the lower part of the perspective projection image or a bird's-eye view image is used, but the imaging of the bonnet, which depends on the vehicle model and the tilt of the camera, becomes a new issue. Furthermore, a bird's-eye view image at night has to be trimmed to coincide with the irradiation range, since the irradiation distance and range of the headlights are limited. On the other hand, feature quantities such as vanishing points and feature points on the lane have been used for the projective transformation from a perspective projection image to a bird's-eye view image, but projective transformation based on such feature quantities is an ill-posed problem.

    Therefore, this paper discusses a quantitative trimming method based on a projective transformation that does not depend on feature quantities and that also coincides with the irradiation range of the headlights.
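
    Although the paper's quantitative method is not reproduced here, the core operation it builds on, warping a perspective road image to a bird's-eye view with a projective transformation (homography), can be sketched as follows. This is a minimal illustration assuming OpenCV; the ground rectangle, pixel scale, and the 15 m headlight irradiation distance are hypothetical values, not taken from the paper.

    import cv2
    import numpy as np

    def birds_eye_view(image, src_pts, rect_m=(8.0, 20.0), px_per_m=20):
        """Warp a perspective road image to a metric bird's-eye view.

        src_pts: four pixel coordinates of a known ground rectangle,
                 ordered far-left, far-right, near-right, near-left.
        rect_m:  (width, length) of that rectangle in metres (assumed).
        """
        w, h = int(rect_m[0] * px_per_m), int(rect_m[1] * px_per_m)
        dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
        return cv2.warpPerspective(image, H, (w, h))

    # Hypothetical trimming step: keep only the nearest 15 m of the
    # bird's-eye view so it coincides with the headlight irradiation range.
    # bev = birds_eye_view(frame, src_pts)
    # bev_lit = bev[-15 * 20:, :]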

  • Shinya ODAGAWA, Akinobu NAKAJIMA, Kuniyuki SAITOH
    2021 Volume 60 Issue 6 Pages 344-349
    Published: 2021
    Released on J-STAGE: January 01, 2023
    JOURNAL FREE ACCESS

    This paper describes the results of a laboratory test of a paddy yellowing rate estimation technique for smart agriculture. The paddy yellowing rate is important for determining the proper time of harvesting, and it needs to be measured in the field by image analysis. Since this technological development is extremely difficult, this laboratory test was conducted as a preliminary step. The study areas are the Hozaki area in Akaiwa City and the Shinogoze area in Okayama City, Okayama Prefecture. Sampling and laboratory tests were conducted on October 7, 2019 and October 9, 2020, one to two weeks before harvest. There were 47 sampling points, and three ears were collected from each point. The ears were photographed with an Olympus TG-5 digital camera, and the image data were converted from RAW to TIFF format. The paddy yellowing rate was calculated by dividing the number of blue (immature) seeds by the total number of seeds. The explanatory variables were normalized difference vegetation indices combining two of the blue, green, and red bands. In the analysis using the 2019 data, the linear regression model with the highest correlation coefficient (R = 0.77) was built from the normalized difference vegetation index combining the red and green bands. An estimation model using the 2020 data could not be constructed; it is probable that ripening in 2020 was more advanced than in the previous year. From these results, it is considered that the proposed method can be applied before the late ripening period.
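
    As a rough illustration of the explanatory variable, the sketch below computes a green-red normalized difference index from an RGB TIFF and feeds it to a linear model. The function names and the coefficients a and b are hypothetical placeholders for illustration, not the paper's fitted regression values.

    import numpy as np
    from PIL import Image

    def green_red_ndi(path):
        """Mean (G - R) / (G + R) over an ear image, a stand-in for the
        two-band normalized difference vegetation index in the paper."""
        img = np.asarray(Image.open(path), dtype=np.float64)  # H x W x 3, RGB
        red, green = img[..., 0], img[..., 1]
        return float(np.mean((green - red) / np.clip(green + red, 1e-9, None)))

    def estimated_yellowing_rate(ndi, a=-1.9, b=0.4):
        """Linear regression model rate = a * NDI + b; a and b are
        placeholder coefficients, not the paper's fitted values."""
        return a * ndi + b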

  • Ryuji TAKADA, Tetsuya KANAGAWA, Shigeaki OKA, Naoaki KAWAMURA, Shunich ...
    2021 Volume 60 Issue 6 Pages 350-353
    Published: 2021
    Released on J-STAGE: January 01, 2023
    JOURNAL FREE ACCESS

    A quick assessment of landslide damage in mountainous areas after a disaster is important for planning disaster recovery actions. For this assessment, deep learning is thought to be an effective way to quickly grasp the post-disaster state. Deep learning for image classification, however, needs a large amount of training and test data. To overcome this problem, transfer learning is thought to be effective, especially when little data is available. In this paper, we compared the results of transfer learning on four major pretrained CNN architectures (AlexNet, GoogLeNet, VGG-16, SqueezeNet), using pre- and post-disaster visible image information. As a result, GoogLeNet showed the best collapse recall rate (81.4%) and accuracy (85.7%), while VGG-16 showed the best non-collapse recall rate (93.1%). A remarkable point of this comparison is that the collapse areas classified as non-collapse by all four models were on ground containing large amounts of sand and clods.
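
    A minimal transfer-learning sketch in PyTorch, assuming an ImageNet-pretrained GoogLeNet from torchvision; the frozen backbone, two-class head, and hyperparameters are illustrative choices, not the authors' exact training setup.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load ImageNet weights and freeze the pretrained backbone.
    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False
    # Replace the classifier with a new trainable collapse/non-collapse head.
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(x, y):
        """One optimization step on a batch of image tensors x and labels y."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()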
