In conventional fingerprint-authentication systems for personal identification, identification rates are reduced when the surface of the fingerprint input device is soiled or damaged by contact. We evaluated the possibility of non-contact fingerprint authentication to address these problems. As an approach to non-contact fingerprint authentication, this study proposes a method of extracting the principal lines (valley lines) of a fingerprint from a color image of a fingertip captured by non-contact visual input. First, the input image is decomposed into 10 subband images by applying the 2-D discrete wavelet transform three times. Next, the edge components are separated from the high-frequency subband images. Then, the principal lines are extracted from the combined Y and Z components of the XYZ color system. The experimental results revealed the usefulness of the proposed technique.
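As a rough sketch of the decomposition step only (the abstract does not specify the wavelet filters, so a Haar wavelet is assumed here, and the function names are illustrative), three applications of a 2-D DWT to the low-frequency band yield exactly 10 subbands:

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    # pairwise averages (low-pass) and differences (high-pass) along rows
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    # then along columns
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def decompose(img, levels=3):
    """Three-level decomposition: 3 detail subbands per level
    plus one final approximation = 10 subbands in total."""
    subbands = []
    ll = img
    for _ in range(levels):
        ll, lh, hl, hh = haar_dwt2(ll)
        subbands.extend([lh, hl, hh])
    subbands.append(ll)
    return subbands

bands = decompose(np.zeros((64, 64)))
print(len(bands))  # 10
```

The edge separation and valley-line extraction in the YZ color components are specific to the paper and are not reproduced here.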
This paper presents a lossless implementation technique for Motion JPEG2000 (MJP2) integrated with invertible deinterlacing. Invertible deinterlacing, which has been previously published, is a technique that suppresses the comb-tooth artifacts caused by field interleaving of interlaced videos while guaranteeing recovery of the original through the inverse process. Because the comb-tooth artifacts degrade the performance of scalable intraframe-based codecs such as MJP2, we have suggested applying invertible deinterlacing as a prefilter to such a scalable codec: it suppresses flickering due to the artifacts at low and middle bit-rate decoding, and the filtering effect can be removed for high-quality decoding. In this work, we further propose an exact lossless implementation technique that integrates the deinterlacer into MJP2 codecs and modifies the header information for appropriate standard decoding. Simulation results show that the comb-tooth suppression capability is preserved at low bit rates with standard MJP2 decoders. Furthermore, experimental results show that the overhead of our proposed method is less than 2.0% compared with normal field interleaving for the video sequence 'Football'.
Simulcast video data are often used to provide multi-quality video over broadband networks. However, simulcast is not efficient in terms of transmission and data storage. Scalable coding, by contrast, was adopted as an MPEG standard, but a scaled bitstream cannot be reconstructed by a generic decoder, which is a problem. A previously proposed updatable scalability scheme adapted simulcast to generic decoders. In this paper, we therefore propose a video cache and delivery method with a scalable architecture. Through simulation experiments based on user-access models, we found that the scalable architecture outperforms simulcast, as explained in this paper.
This paper presents a personal identification system based on chain-coded isodensity maps. This face identification system uses isodensity lines, in which the properties of a face are well reflected, and it obtains high discrimination accuracy. Isodensity maps are line pictures; therefore, they can easily be transformed into a chain code. By using such coded isodensity lines, an improvement in processing performance can be expected in terms of both processing time and required memory size. Experimental results show that the processing time was reduced to approximately one-seventh (about 15%) of that of the conventional method.
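The chain-coding step itself is standard: an ordered 8-connected line picture is reduced to a sequence of direction indices, which is far more compact than the pixel coordinates. A minimal Freeman chain-code sketch (the direction numbering is one common convention, not necessarily the paper's):

```python
# 8-directional Freeman chain code in image coordinates (y down):
# index -> (dy, dx), starting east and going counterclockwise
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Encode an ordered list of 8-connected pixel coordinates
    as a list of Freeman direction indices."""
    code = []
    for (y0, x0), (y1, x1) in zip(points, points[1:]):
        code.append(DIRS.index((y1 - y0, x1 - x0)))
    return code

# a short diagonal-then-horizontal line segment
line = [(0, 0), (1, 1), (1, 2), (1, 3)]
print(chain_code(line))  # [7, 0, 0]
```

Storing one 3-bit code per step instead of two coordinates per pixel is the source of the memory saving the abstract mentions.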
Image compression algorithms are evaluated on several performance measures, two of which are the total amount of compressed image data and the resulting image quality. In this paper, we deal with an image compression algorithm that divides an image domain into smaller triangles. Although triangulation generally causes image quality to deteriorate, the relationship between the triangulation and the resulting quality is not trivial. We therefore propose an efficient algorithm for counting the number of internal vertices in a triangulated domain, in order to investigate this relationship. We believe that the solution will contribute to further work on improving the image compression process.
We developed a new method for setting the threshold value on a quasi-chroma histogram to extract facial areas. Our method is based on a technique used in forecasting stock market movements: the dead cross and the golden cross. Using two averaged histograms, one smoothed over a short interval and one over a long interval, the dead cross and the golden cross are obtained as the crossing points of these two histograms. The threshold value is then set to the minimum point of the frequency of quasi-chroma elements between the dead cross and the golden cross.
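The procedure above can be sketched directly. The window widths, the trailing-average form, and the crossing-detection rules below are our illustrative assumptions, not values from the paper:

```python
def trailing_ma(h, w):
    """Trailing moving average of width w (window shrinks at the start)."""
    return [sum(h[max(0, i - w + 1):i + 1]) / (i + 1 - max(0, i - w + 1))
            for i in range(len(h))]

def threshold_by_cross(hist, short=3, long=9):
    """Set the threshold at the histogram minimum between the dead cross
    (short-interval average falls below the long one) and the following
    golden cross (short-interval average rises back above it)."""
    s, l = trailing_ma(hist, short), trailing_ma(hist, long)
    dead = golden = None
    for i in range(1, len(hist)):
        if dead is None and s[i - 1] >= l[i - 1] and s[i] < l[i]:
            dead = i
        elif dead is not None and s[i - 1] <= l[i - 1] and s[i] > l[i]:
            golden = i
            break
    if dead is None or golden is None:
        return None  # no valley bracketed by the two crossings
    return min(range(dead, golden + 1), key=lambda i: hist[i])

# bimodal quasi-chroma histogram with a valley at bin 7
hist = [10, 9, 8, 7, 5, 3, 2, 1, 2, 3, 5, 7, 8, 9, 10]
print(threshold_by_cross(hist, short=2, long=4))  # 7
```

Bracketing the search by the two crossings keeps the threshold in the valley between the facial and background modes instead of at a spurious local minimum elsewhere in the histogram.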
There is widespread interest in wide display screens such as CAVE, whose displays have a field of view of over 180 degrees and cover the whole field of vision. The design of the display screen is essential to achieving an accurate sense of reality in virtual environments (VEs), because an image can provide users with a wealth of information. The human response to visual stimuli in VEs is perceived to be one of the more promising methods for estimating the sense of presence and immersion of a VE system. This study analyzes the body sway of subjects, caused by vection, in a VE system in order to identify the essential area and proper size of display necessary for an effective sense of presence when conditions such as the display area are changed. It was demonstrated that a field of view of over 100 degrees and bottom-side projection play an important role in creating a sense of presence in the VE, even under combined display conditions. Moreover, body sway is closely related to the subjective immersion of subjects as measured by self-report.
The composite images used in virtual studios at broadcast stations require that unnatural sensations be minimized and that real-time characteristics be maintained. If the lighting conditions of the captured images do not match those of the computer graphics (CG), however, the composite image will appear unnatural to viewers. In response to this problem, we used projectors to reproduce the CG lighting environment in real space, with the aim of reducing the unnatural feeling caused by mismatched lighting conditions. However, chroma keying, the conventional video compositing technique, does not allow arbitrary lighting conditions on the real-space side, thereby preventing the proposed lighting reproduction technique from being applied to a virtual studio. For this reason, we developed a new video compositing technique that uses transmissive retro-reflective material, an infrared camera placed at a position conjugate to that of a color camera, and infrared lighting to remove the constraints associated with the chroma-key technique. In this paper, we formally propose this new technique and demonstrate its effectiveness.
CaS:Cu,F thin films were prepared with the aim of developing a blue-emitting thin-film electroluminescence (TFEL) device with high luminance and good chromaticity. The dependence of the structural and photoluminescent (PL) properties of electron-beam-deposited CaS:Cu,F thin films on post-deposition annealing temperature and time was investigated. The crystallinity and PL intensity of the films were improved significantly by annealing at temperatures higher than 800°C for up to 10 min. The improvement tended to saturate for annealing longer than 10 min. These results indicate that annealing for around 10 min is suitable. A CaS:Cu,F TFEL device was fabricated on an Si substrate using SiO2 and Y2O3 films as insulator layers and ZnS films as buffer layers. The device annealed at 850°C for 10 min with 0.3 at% Cu+ exhibited blue emission with a distinct peak at 425 nm and CIE coordinates of x = 0.217, y = 0.223.
Demand is high for the development of a super-high-resolution imaging system exceeding HDTV quality for future high-reality communication services. In this paper, we propose a 60-frame/s progressive 8-megapixel super-high-resolution image (2160/60p) interface using 8 HD-SDI lines. With our interface, an 8-megapixel super-high-resolution image is divided into four rectangles, each carried over two HD-SDI lines (1080/60i x 2). Our design moves the last line of the upper region to the first line of the lower region; as a result, there is no seam between the upper and lower image regions. Moreover, a super-high-resolution image system can be built more efficiently from HDTV equipment when using this interface. In this paper, we demonstrate an example of a super-high-resolution image system using our interface.
Subjects were placed in complete darkness and asked to make a saccade from a repeatedly flashing stimulus (500 Hz) at the center of their visual field to a saccade target located 10 deg to the right of the stimulus. The stimulus produced a phantom flash array in the direction of the saccade. The stimulus was extinguished or changed in color after the onset of the saccade. The subjects reported the perceived position of the rightmost phantom flash of the array. The time course of the perceived positions suggested that the degree of mislocalization of each phantom flash depended on the time from saccade onset, when the stimulus began flashing before saccade onset, or otherwise on the time from the beginning of stimulus flashing. Our previous study suggested that the degree of mislocalization depends on the attended position during the saccade. Taken together with that study, the present results suggest that the rightmost phantom flash position indicates the degree of the attention shift from saccade onset, or from the appearance of the flashing stimulus, until extinction of the stimulus.