写真測量とリモートセンシング (Photogrammetry and Remote Sensing)
Online ISSN : 1883-9061
Print ISSN : 0285-5844
ISSN-L : 0285-5844
Latest Issue
Preface
Camera Eye
Original Papers
  • 中畑 光貴, 山本 雄平, 今井 龍一, 神谷 大介, 田中 成典, 中原 匡哉
    2023 Volume 62 Issue 1 Pages 4-21
    Published: 2023
    Released on J-STAGE: 2024/03/01
    Journal Free Access

    In traffic censuses, image processing technologies are expected to count passing automobiles by analyzing video. Many counting techniques based on deep learning have been proposed, but it is difficult to maintain sufficient accuracy because new automobile models are released year after year. High accuracy therefore has to be maintained by continuously re-training on data for automobiles with new shapes and colors. However, the labor cost of this maintenance is huge because training data must be created continuously. In this research, a recursive active-learning technique for segmentation of automobile parts is proposed, and its usefulness is clarified.
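The select-annotate-retrain cycle behind recursive active learning can be sketched as follows. This is a minimal toy illustration, not the paper's method: the nearest-centroid "model", two-class 2-D features, and margin-based uncertainty are all stand-ins for the deep segmentation model and automobile-part images used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes in a 2-D feature space (hypothetical stand-in for
# image features produced by a segmentation model).
X_pool = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y_pool = np.array([0] * 100 + [1] * 100)

labeled = [0, 1, 100, 101]  # small seed set: two labeled examples per class
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

def centroids(idx):
    """Nearest-centroid 'model' trained on the currently labeled indices."""
    return {c: X_pool[[i for i in idx if y_pool[i] == c]].mean(axis=0)
            for c in (0, 1)}

def margin(x, cent):
    """Uncertainty = gap between distances to the two centroids (small = uncertain)."""
    d = sorted(np.linalg.norm(x - cent[c]) for c in cent)
    return d[1] - d[0]

# Recursive active-learning rounds: retrain, pick the most uncertain samples,
# "annotate" them (here by revealing y_pool), and fold them back into training.
for _ in range(5):
    cent = centroids(labeled)
    unlabeled.sort(key=lambda i: margin(X_pool[i], cent))
    picked, unlabeled = unlabeled[:10], unlabeled[10:]
    labeled.extend(picked)

cent = centroids(labeled)
pred = [min(cent, key=lambda c: np.linalg.norm(x - cent[c])) for x in X_pool]
acc = float(np.mean(np.array(pred) == y_pool))
print(f"accuracy after active learning: {acc:.2f}")
```

Labeling only the most uncertain samples each round is what keeps the annotation cost low relative to labeling the whole pool.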

  • 村井 亮介, 高木 方隆
    2023 Volume 62 Issue 1 Pages 22-29
    Published: 2023
    Released on J-STAGE: 2024/03/01
    Journal Free Access

    Observation data from optical cameras are affected by shadows and by surface reflectance characteristics. Several methods have been proposed to correct shadows caused by topography and canopy shape, but correction using the bi-directional reflectance factor (BRF) has not yet been established for areal Unmanned Aerial Vehicle (UAV) observation. To compute the BRFs, the Digital Number was obtained from each original image, and the angle information was derived from the camera information together with the position and orientation estimated by an SfM application. BRF analysis was performed on a pixel-by-pixel basis by superimposing the Digital Number and angular information of each original image on the projection plane. The developed method takes advantage of the multi-directional observation capability of the UAV to obtain BRFs areally, and images are then generated with the BRFs taken into account. The ortho image generated from the BRFs has a uniform relationship between light-source position and viewing angle, minimizing shadow effects at forward-light viewing angles. The uniqueness of this research is that it proposes an inexpensive areal BRF observation method, which contributes to understanding the geophysical properties of geological features and to improving the accuracy of optical camera imaging.
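The per-pixel BRF idea can be sketched under strong simplifying assumptions: here the Digital Number varies linearly with view zenith angle (synthetic data), the nadir DN is estimated per pixel by a least-squares fit, and each observation's BRF is taken as its DN relative to that nadir estimate. The paper's actual pipeline instead derives angles from SfM camera poses and superimposes real overlapping images on a projection plane.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multi-directional observations: each pixel on the projection
# plane is seen from several view angles across overlapping UAV images.
n_pix, n_views = 50, 8
view_zenith = rng.uniform(0, 40, (n_pix, n_views))   # degrees, per observation
true_reflectance = rng.uniform(0.1, 0.5, n_pix)      # assumed surface property

# Simulated Digital Numbers with a simple linear view-angle effect
# (brighter off-nadir).
dn = true_reflectance[:, None] * (1 + 0.01 * view_zenith)

# Per-pixel BRF approximation: fit DN against zenith angle, take the
# intercept as the nadir DN, and express each observation relative to it.
brf = np.empty_like(dn)
for p in range(n_pix):
    slope, intercept = np.polyfit(view_zenith[p], dn[p], 1)
    brf[p] = dn[p] / intercept

corrected = dn / brf   # angle-normalized values: the view-angle effect cancels
print(f"max nadir-recovery error: {np.max(np.abs(corrected[:, 0] - true_reflectance)):.2e}")
```

Because the synthetic angle effect is exactly linear, dividing by the fitted BRF removes the view-angle dependence completely; with real imagery the fit would only approximate it.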

Research Note
  • 山谷 祐貴, 木村 篤史, 小林 伸行
    2023 Volume 62 Issue 1 Pages 30-37
    Published: 2023
    Released on J-STAGE: 2024/03/01
    Journal Free Access

    We present a crop classification method that uses L-band synthetic aperture radar (SAR) data from PALSAR-2 and C-band SAR data from RADARSAT-2, employing random forests as the classification algorithm, and evaluate its accuracy. Backscatter coefficients (HH, HV and VV) and polarimetric parameters (double-bounce, volume and surface scattering, entropy, anisotropy and alpha angle) were calculated from the RADARSAT-2 and PALSAR-2 quad-polarimetric data, and the HH backscatter coefficient was calculated from the PALSAR-2 single-polarimetric data. Classification using data from one scene gave better overall accuracy with the RADARSAT-2 quad-polarimetric data than with the PALSAR-2 quad-polarimetric data. Classification using one scene from RADARSAT-2 and one from PALSAR-2 was more accurate than using only one RADARSAT-2 scene, and the difference was statistically significant; when only one scene is available, it is therefore useful to combine C-band and L-band data. Classification using data from three scenes again gave better overall accuracy for RADARSAT-2 than for PALSAR-2, but there was no significant difference between combining three scenes from each satellite and using three scenes from RADARSAT-2 alone. With multiple scenes, C-band data alone can give sufficient accuracy. However, valid classification variables cannot be calculated from single-polarimetric data, so the accuracy of classification using PALSAR-2 quad-polarimetric data requires further examination.
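The classification step can be sketched with synthetic data. The feature list mirrors the variables named in the abstract, but the class means, scikit-learn settings, and per-field feature vectors are illustrative assumptions, not the paper's data; combining scenes would simply concatenate each scene's feature columns before fitting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical per-field SAR features, mimicking the abstract's inputs:
# backscatter coefficients plus polarimetric decomposition parameters.
feature_names = ["HH", "HV", "VV", "double_bounce", "volume", "surface",
                 "entropy", "anisotropy", "alpha"]
n_per_class, n_classes = 80, 3   # e.g. three crop types (synthetic)

X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, len(feature_names)))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Random forest classifier, as in the paper; hyperparameters are assumptions.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
overall_accuracy = clf.score(X_te, y_te)
print(f"overall accuracy: {overall_accuracy:.2f}")
```

The paper's accuracy comparisons (one scene vs. three, C-band vs. C+L) would then come from refitting this classifier on each feature combination and testing the differences statistically.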
