Paddy fields can be used and conserved effectively by assessing the current situation, e.g. the distribution of abandoned agricultural land on a regional scale, using satellite data. However, estimating the area covered by paddy fields from satellite images is prone to misclassification, especially in mountainous areas, because the radiance locally decreases on sloped terrain and can lead to incorrect identification. Paddy fields reflect little radiance because they are covered with water, and are therefore often confused with other forms of land cover whose radiance is reduced by the steep terrain. In this study, we develop a method that directly corrects the classification results of Landsat TM images using the characteristics of paddy terraces in mountainous areas. We then apply this method to the Honghe Hani Rice Terraces in Yunnan Province, China. The results show that the correction method significantly improves the accuracy of classifying paddy fields, and that the kappa coefficient of land cover classification is equal to or higher than that obtained with ATCOR3, which is widely used for terrain correction.
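For context, the radiometric effect described above is what standard topographic corrections try to remove before classification. Below is a minimal sketch of the classic cosine correction, assuming DEM-derived slope and aspect and a known sun position; this is illustrative background only, not the method proposed in the study (which corrects the classification result itself, and ATCOR3 uses a more elaborate model).

```python
import numpy as np

def cosine_terrain_correction(radiance, slope_deg, aspect_deg,
                              sun_zenith_deg, sun_azimuth_deg):
    """Classic cosine terrain correction (illustrative sketch only).

    radiance, slope_deg, aspect_deg : 2-D arrays of equal shape.
    Returns radiance rescaled as if every pixel were on flat terrain.
    """
    slope = np.radians(slope_deg)
    aspect = np.radians(aspect_deg)
    sz = np.radians(sun_zenith_deg)
    sa = np.radians(sun_azimuth_deg)
    # cosine of the local solar incidence angle on a tilted surface
    cos_i = (np.cos(sz) * np.cos(slope)
             + np.sin(sz) * np.sin(slope) * np.cos(sa - aspect))
    cos_i = np.clip(cos_i, 1e-3, None)  # avoid blow-up in deeply shaded pixels
    return radiance * np.cos(sz) / cos_i
```

On flat terrain the factor is 1, so flat pixels are unchanged; sun-facing slopes are darkened and shaded slopes brightened.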
When a large-scale natural disaster occurs, it is important to grasp the overall picture of the damage an area has suffered as soon as possible. In addition, maps and images showing the status and location of the damage are necessary to support efficient emergency relief services such as firefighting, volunteers, and other groups. However, the wider the affected area, the more time it takes to confirm the damage if such efforts rely on human power alone. This challenge should be addressed technologically, by developing a method that can analyze an affected area within a short period of time and provide useful information for emergency relief and rescue operations. The final goal of this study is to provide data supporting emergency relief efforts in a disaster-affected area by locating damaged buildings shortly after the disaster. In this study, the importance of time in emergency situations is prioritized by designing a method that uses only a single satellite image of an affected area, eliminating complex algorithms and auxiliary data. The uniqueness of our method lies in applying object-based region segmentation to the images and using object features derived from texture, hierarchy, and other information to extract damaged buildings. Out of 26 features computed for the objects, we found one single feature and three combinations of two features that are effective in extracting damaged buildings: Rectangular fit, Homogeneity, Number of sub-objects/Area, and Length of longest edge/Area.
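Of the features listed above, Rectangular fit is the easiest to illustrate. The sketch below uses a simplified definition (object pixel area divided by the area of its axis-aligned bounding box); eCognition's actual "Rectangular fit" fits a rectangle of equal area and side ratio, so this function and its formula are a stand-in, not the software's implementation.

```python
import numpy as np

def rectangular_fit(mask):
    """Simplified rectangular fit of a segmented object.

    mask : 2-D boolean array, True where the object's pixels are.
    Returns object area / bounding-box area, in (0, 1]; 1.0 means the
    object fills its bounding rectangle exactly (undamaged roofs tend
    to score higher than rubble-like objects).
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return ys.size / float(h * w)
```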
On many land cover maps derived from optical satellite images, parts of mountain shadows (shadows cast by terrain) are misclassified as water, because a target in shadow looks darker and bluer when illuminated only by scattered sunlight. In the present study, we developed a land cover classification method that is hardly affected by mountain shadows, using multi-temporal optical satellite images. Experiments were carried out with ALOS/AVNIR-2 images (after atmospheric correction, orthorectification, and slope correction; years 2006 to 2011; 63 acquisition dates) covering 36°N to 37°N and 140°E to 141°E (1 degree×1 degree). First, for each image at each date, the likelihood of each land cover category was estimated by kernel density estimation (KDE). Next, the positions of mountain shadows were estimated from elevation data (AW3D DSM) for each date. In the shadowed parts, the likelihoods were then adjusted to reduce the differences among categories, so that the contribution of shadowed observations decreased and that of non-shadowed observations increased. As a result, most of the erroneous classifications apparently caused by mountain shadows disappeared, particularly the false water areas on the north faces of mountains, on the land cover map created by integrating the likelihoods over all dates.
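The per-date likelihood estimation and shadow down-weighting described above can be sketched as follows. This is a toy reconstruction, not the authors' code: the function name, the multiplicative weighting scheme, and the array layout are all assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def classify_multitemporal(pixels_t, train, shadow_w):
    """Toy sketch of multi-temporal KDE likelihood integration.

    pixels_t : (T, N, B) reflectance of N pixels in B bands at T dates
    train    : dict {category: (M, B) training samples}
    shadow_w : (T, N) weights in [0, 1]; low where terrain shadow is likely
    Returns the most likely category per pixel after summing weighted
    log-likelihoods over all dates.
    """
    cats = list(train)
    T, N, B = pixels_t.shape
    # one KDE per category, fitted once on the training spectra
    kdes = {c: gaussian_kde(train[c].T) for c in cats}
    loglik = np.zeros((len(cats), N))
    for t in range(T):
        for k, c in enumerate(cats):
            # density per pixel at date t, down-weighted inside shadows
            loglik[k] += shadow_w[t] * np.log(kdes[c](pixels_t[t].T) + 1e-12)
    return [cats[i] for i in np.argmax(loglik, axis=0)]
```

Setting `shadow_w` near zero for shadowed dates lets the sunlit dates dominate the integrated likelihood, which is the effect the paper describes.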
Agricultural remote sensing is an important field, as techniques related to smart/precision agriculture could improve the quality of rice. This study aimed to explore models that consider the nitrogen content of the canopy and the transport and accumulation of assimilation products in the grain, in order to estimate the protein content of brown rice from UAV remote sensing and meteorological observation data.
The conclusions of this study are as follows: (1) Examination of the optimum observation timing for protein estimation found that the normalized difference vegetation index (NDVI) at the heading stage was most strongly correlated with protein content (PC); NDVI at 30 days after heading was the second most correlated. At both timings, the effect of variation in growth stage caused by differences in rice planting dates was small. (2) When NDVI at the heading stage was combined with temperature data from the grain-filling stage, the average temperature 5-20 days after heading was most correlated with PC in Koshihikari, while in Fusaotome and Fusakogane the average temperature 0-20 days after heading was most correlated with PC. (3) In this study, higher temperatures at the grain-filling stage decreased PC; however, the influence of temperature during the grain-filling stage on PC was much smaller than that of NDVI (nitrogen condition).
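The two quantities the analysis repeatedly relates, NDVI and its correlation with protein content, can be sketched as follows. The band arguments and the plain Pearson formulation are assumptions for illustration; the study's actual processing chain is not reproduced here.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def pearson_r(x, y):
    """Pearson correlation coefficient, the metric used above to rank
    observation timings against measured protein content (PC)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

A per-plot workflow would compute `ndvi` at each timing, then compare `pearson_r(ndvi_values, pc_values)` across timings to find the most correlated one.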
At northern high latitudes, warming trends have been accelerating, and it is important to understand how terrestrial ecosystems in these regions respond to such climate change. Satellite-based monitoring of vegetation parameters such as the leaf area index (LAI) provides diagnostic characteristics of terrestrial vegetation dynamics, so an effort to assure data quality through comparison with ground-based datasets is crucial. The objective of this study is to evaluate LAI derived from gap fraction measurements under clear and cloudy sky conditions. We performed gap fraction measurements using plant canopy analyzers at four spruce forest sites in interior Alaska, USA, in September-October 2011 and August 2016, and used the measured gap fraction to compute the LAI. After correcting for the effect of scattered radiation on the gap fraction, we obtained an LAI (Lm) of 1.00 to 1.75. When the woody area and shoot-level clumping effects were taken into account, the green LAI was estimated to range from 1.18 to 2.33. The LAIs estimated after the scattering correction were closer to those obtained under cloudy skies, suggesting that LAI obtained under clear skies can be considered to have the same accuracy as that obtained under cloudy skies.
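The conversion from gap fraction to LAI follows the usual Beer-Lambert and clumping-correction chain. A minimal single-angle sketch is below; the projection function G, the woody-to-total ratio, and the clumping coefficients are placeholder assumptions, not the study's values, and the plant canopy analyzer itself integrates over several zenith rings rather than one angle.

```python
import numpy as np

def effective_lai(gap_fraction, view_zenith_deg, G=0.5):
    """Effective LAI from a single-angle gap fraction via Beer-Lambert:
    P(theta) = exp(-G * L / cos(theta)), solved for L.
    G = 0.5 assumes a spherical leaf-angle distribution (an assumption)."""
    theta = np.radians(view_zenith_deg)
    return -np.cos(theta) * np.log(gap_fraction) / G

def green_lai(effective, alpha=0.15, gamma_e=1.4, omega_e=0.9):
    """Green LAI corrected for woody area (alpha = woody-to-total ratio),
    needle-to-shoot clumping (gamma_e) and canopy-scale clumping (omega_e),
    in the common form L = (1 - alpha) * L_e * gamma_e / omega_e.
    All coefficient values here are placeholders."""
    return (1.0 - alpha) * effective * gamma_e / omega_e
```

With these placeholder coefficients, correcting an effective LAI upward by the shoot-level clumping term while removing the woody fraction reproduces the pattern reported above, where green LAI exceeds the raw Lm range.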
We propose a method for accelerating remote-sensing image analyses using a graphics processing unit (GPU). We applied the proposed GPU parallel processing methods to both filtering and correlation processes. We observed that the GPU acceleration increased with the moving window size for the convolution filter, because the convolution filter does not use any working arrays in GPU shared memory. Since the median filter uses a sorting array in shared memory, its acceleration peaked at a window size of 9 and then decreased. We also investigated GPU parallel processing for correlation in both the spatial and frequency domains. The area correlation method operates on moving windows in the spatial domain and can be sped up in the same way as the filtering processes. As an example of a frequency-domain method, we investigated whether the phase-only correlation (POC) could be accelerated. We developed a method that avoids the capacity constraint of GPU shared memory: by processing the correlation window line by line, the GPU could accelerate POC even for window sizes exceeding 64. Our investigation of GPU acceleration for filtering and correlation thus revealed that the key points are reducing the access load on global memory and avoiding the shared memory size constraints of the GPU.
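The phase-only correlation itself is compact enough to sketch on the CPU with NumPy; the line-by-line GPU scheme described above changes only where the intermediate data lives, not the math. Function names below are ours, not the paper's.

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface of two same-sized image windows: normalize the cross-power
    spectrum to unit magnitude (keeping only phase) and invert it."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = np.conj(F) * G
    r = cross / np.maximum(np.abs(cross), 1e-12)  # keep phase, drop magnitude
    return np.fft.ifft2(r).real

def estimate_shift(f, g):
    """Integer-pixel shift s such that g == np.roll(f, s, axis=(0, 1)),
    read off from the location of the POC peak."""
    poc = phase_only_correlation(f, g)
    dy, dx = np.unravel_index(np.argmax(poc), poc.shape)
    h, w = poc.shape
    # map the peak index to a signed shift (FFT wrap-around)
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

For pure translations the POC surface is a sharp delta-like peak, which is why POC-based image matching is robust to brightness differences between the two windows.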