Recently, large numbers of photographs have become easy and inexpensive to obtain with large-format digital cameras compared to film cameras. As a result, large-format digital cameras have become widespread in aerial photographic surveying, and multi-view photographs of the same ground area taken from various viewpoints are frequently available. To improve the efficiency of multi-image matching, we propose a new feature-searching method that uses multi-view images. By using independently rectified images, obtained by projective transformation to the vertical direction, the method avoids complicated three-dimensional ray-tracing problems. Multi-image matching can therefore be realized using only the magnification from the FOE (Focus of Expansion) of each independently rectified image as a two-dimensional image transformation. This paper presents the geometric conditions under which image magnification solves the problems of conventional multi-image matching, namely the decrease of matching score caused by flight attitude, the gaps caused by occlusion, and the computational cost of repeated operations. The independent rectification method is shown to address these problems, and its effectiveness is verified by experiments using multi-view images from actual aerial photographs.
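The two-dimensional transformation described above, scaling about the FOE by a single magnification factor, can be sketched as follows. This is a minimal illustration of the geometric idea only; the function name and interface are our own, not the paper's.

```python
import numpy as np

def magnify_about_foe(points, foe, m):
    """Scale image points about the Focus of Expansion (FOE).

    In an independently rectified (vertical-projection) image,
    corresponding points between views lie along rays through the FOE,
    so a single scalar magnification m maps p -> foe + m * (p - foe).
    points: (N, 2) array of pixel coordinates; foe: (2,) array.
    """
    points = np.asarray(points, dtype=float)
    foe = np.asarray(foe, dtype=float)
    return foe + m * (points - foe)

# A point 10 px from the FOE lands 20 px away after magnification m = 2:
p = magnify_about_foe([[110.0, 100.0]], foe=[100.0, 100.0], m=2.0)
print(p)  # [[120. 100.]]
```

Because the search reduces to this one-parameter scaling, candidate matches can be enumerated over m alone rather than over a full three-dimensional ray.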
The recent increase in the number of pixels in images acquired by digital cameras encourages their use for image measurement. However, the large number of pixels makes such images difficult to handle, so lossy image compression is often required. Almost all digital cameras have a built-in lossy JPEG compression function. Since there are few reports evaluating image-matching accuracy with pairs of lossy-compressed images acquired by a digital camera, we investigated the effects of in-camera lossy JPEG compression on the accuracy of image matching. This paper reports a preparatory experiment conducted to evaluate lossy JPEG compression quantitatively using 54 diverse images. Two compression parameter sets (a downsampling ratio, a quantization table, and a Huffman code table) used in the Canon EOS 20D were investigated. The experimental results demonstrate that the compression ratio is correlated with grey level difference vector measures. Furthermore, the results suggest that the degradation of images reconstructed from in-camera compressed images is so small that it may have no effect on the accuracy of image matching.
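The grey level difference vector (GLDV) mentioned above is the normalized histogram of absolute grey-level differences between pixel pairs at a fixed displacement. The abstract does not state which GLDV summary measures were used; mean and contrast are common choices and are shown here as an assumed sketch.

```python
import numpy as np

def gldv_measures(img, dx=1, dy=0, levels=256):
    """Grey level difference vector (GLDV) measures for displacement (dx, dy).

    The GLDV is the normalized histogram of |I(x, y) - I(x+dx, y+dy)|;
    summary statistics of it (mean, contrast, ...) characterize texture
    and, in this context, compression-induced degradation.
    """
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    a = img[:h - dy, :w - dx]          # reference pixels
    b = img[dy:, dx:]                  # displaced pixels
    diff = np.abs(a - b).ravel()
    p = np.bincount(diff, minlength=levels) / diff.size
    i = np.arange(levels)
    return {"mean": float((i * p).sum()),
            "contrast": float((i ** 2 * p).sum())}

flat = np.full((4, 4), 128)            # uniform image: all differences zero
print(gldv_measures(flat))             # {'mean': 0.0, 'contrast': 0.0}
```

A blocky JPEG reconstruction changes these statistics relative to the original, which is what makes them usable as degradation indicators.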
In order to validate the utility of rbNDVI (red and blue Normalized Difference Vegetation Index) for chlorophyll estimation, we estimated leaf chlorophyll concentration at the heading-to-anthesis stage, which correlates highly with grain protein concentration, using ground-based and aerial hyperspectral data at agricultural fields in Gifu prefecture. In the ground measurements, rbNDVI showed a higher correlation with leaf chlorophyll concentration (R² = 0.731) than the widely used NDVI (Normalized Difference Vegetation Index; R² = 0.231) and TCARI/OSAVI (Transformed Chlorophyll Absorption in Reflectance Index / Optimized Soil-Adjusted Vegetation Index; R² = 0.333), and the estimation equation could be applied at a different study site (Root Mean Square Error: RMSE = 2.54 μg cm⁻²). At the aerial scale, rbNDVI also gave good chlorophyll estimates (2006: R² = 0.427; 2007: R² = 0.447; RMSE = 4.18 μg cm⁻²) after atmospheric correction with the 6S code and a reflectance-matching method. From these results, a spatial distribution map of chlorophyll concentration was created. Such a map based on rbNDVI may provide useful information for additional fertilization aimed at improving grain protein concentration over large areas.
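The standard NDVI formula is well established; the abstract names the red and blue bands for rbNDVI but does not spell out its formula, so the analogous normalized difference of red and blue reflectance is assumed in the sketch below.

```python
import numpy as np

def ndvi(nir, red):
    """Standard NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def rb_ndvi(red, blue):
    """rbNDVI, assumed here to be (Red - Blue) / (Red + Blue);
    the abstract names the bands but not the exact expression."""
    red, blue = np.asarray(red, float), np.asarray(blue, float)
    return (red - blue) / (red + blue)

# Reflectance values are illustrative, not from the paper:
print(round(float(ndvi(0.5, 0.1)), 3))     # 0.667
print(round(float(rb_ndvi(0.3, 0.1)), 3))  # 0.5
```

Leaf chlorophyll concentration would then be estimated by regressing measured concentrations against the index, giving the estimation equation applied at the second study site.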
We propose a new method for observing sky conditions to replace the existing visual observation. In this paper, we describe how to capture whole-sky images, how to discriminate sun, cloud, and blue-sky areas in those images, and how to classify sky conditions taking account of weather, sun appearance, cloud existence, and sky brightness. For the discrimination of sky conditions, we use a Sky Index (SI) and a Brightness Index (BI) calculated from the whole-sky images: SI expresses the degree of blueness versus greyness, and BI indicates the degree of brightness. Sun, cloud, and blue-sky areas are separated by SI and BI. Moreover, we introduce four sky parameters that describe whether the sun appears, the fraction of cloud area (cloud cover), the overall brightness of the whole image, and the blueness of the blue-sky area. Multi-temporal whole-sky images are then classified into the various sky-condition categories. The results of this sky-condition observation are compared with data observed at the local meteorological observatory.
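The abstract does not give the formulas for SI and BI, so the per-pixel sketch below makes explicit assumptions: SI as the normalized blue-red difference (B − R)/(B + R), BI as mean RGB brightness normalized to [0, 1], and purely illustrative thresholds.

```python
import numpy as np

def sky_index(r, b):
    """Sky Index (SI), assumed here as (B - R) / (B + R).
    High SI -> blue sky; SI near 0 -> grey (cloud)."""
    r, b = np.asarray(r, float), np.asarray(b, float)
    return (b - r) / (b + r)

def brightness_index(r, g, b, vmax=255.0):
    """Brightness Index (BI), assumed here as mean RGB brightness
    normalized to [0, 1]. BI near 1 -> very bright (sun)."""
    return (np.asarray(r, float) + g + b) / (3.0 * vmax)

def classify_pixel(si, bi, si_thr=0.12, bi_thr=0.95):
    """Thresholds are illustrative only; the paper's values are not given."""
    if bi >= bi_thr:
        return "sun"
    return "blue sky" if si >= si_thr else "cloud"

# A bluish, moderately bright pixel (R=60, G=120, B=200):
label = classify_pixel(sky_index(60, 200), brightness_index(60, 120, 200))
print(label)  # blue sky
```

Applying this per-pixel rule over a whole-sky image yields the cloud-cover fraction and the other sky parameters as simple area statistics.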