A caricature can express an artist's impression of the target face. We propose an interactive facial caricature drawing system, "NIGAO". Our aim is to create a system that anyone can use to synthesize a caricature reflecting his or her impression of the target face. NIGAO has two modes: a "selection and adjustment of facial parts" mode, in which the user directly manipulates the location or geometry of facial parts, and an "interactive genetic algorithm (IGA)" mode, in which the user influences the desired caricature by selecting, from a group of caricatures, those whose facial characteristics should serve as the dominant genes. Users can switch between the modes as needed. We had 15 users imagine the faces of two famous people and synthesize their caricatures using NIGAO. The results of this experiment show that a caricature synthesized using the IGA mode after using the parts-adjustment mode reflects the user's impression of the target face better than caricatures synthesized using only the parts-adjustment mode. These results also demonstrate the effectiveness of NIGAO's interface for synthesizing caricatures.
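The selection-driven IGA loop can be sketched as follows. This is a minimal illustration under assumed conventions, not NIGAO's actual implementation: the genome encoding (a flat list of facial-part parameters), the uniform crossover, and the mutation scale are all hypothetical.

```python
import random

def next_generation(population, selected_idx, mutation_rate=0.1):
    """One IGA step: the user's selections act as the fitness signal.

    population   -- list of genomes (each a list of facial-part parameters)
    selected_idx -- indices of the caricatures the user marked as dominant genes
    """
    parents = [population[i] for i in selected_idx]
    children = []
    while len(children) < len(population):
        if len(parents) > 1:
            a, b = random.sample(parents, 2)
        else:
            a, b = parents[0], parents[0]
        # uniform crossover of the two selected parents
        child = [random.choice(pair) for pair in zip(a, b)]
        # small Gaussian mutation keeps variety among the candidates
        child = [g + random.gauss(0, 0.05) if random.random() < mutation_rate else g
                 for g in child]
        children.append(child)
    return children

# four candidate caricatures, each described by 3 part parameters
pop = [[random.random() for _ in range(3)] for _ in range(4)]
pop = next_generation(pop, selected_idx=[0, 2])  # user picked caricatures 0 and 2
```

In an interactive setting the scoring step is simply the user clicking on the caricatures he or she likes; no explicit fitness function is ever written down.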
The complex and unfamiliar hand positions required to produce notes on a guitar mean that a beginner usually finds learning to play difficult. We propose a system that makes use of augmented reality. The correct way to hold the strings is shown by overlaying virtual hand models and lines onto the guitar via a display. By mirroring the hand movements shown in this visual guidance, the player learns the correct hand positions. Accurate registration between the visual guidance and the guitar is a vital part of this system, so we need to track both the player's hand positions and the position of the guitar. We also propose a method of tracking the guitar that hybridizes a visual marker and the natural features of the guitar. Our tracking method enabled the system to continually display visual guides at the required positions.
We propose a projection-based wearable system, "PALMbit", which enables the user's palm to serve both for inputting control events and for outputting graphic information. The I/O interface of a wearable computer depends largely on its practical applications. In the proposed system, we developed both input and output methods for controlling digital appliances: optical projection onto the user's palm, which is registered by image processing, and a gesture interface based on finger actions. By projecting an image of a virtual remote controller onto the palm, the user is able to control digital equipment intuitively using his or her own mental body image and haptic feedback. Experimental results with a prototype application of the proposed I/O methods show that the naturalness of images projected onto palms is superior to that of HMD-based images.
As society is aging, we need an interface that is appropriate for elderly people. A three-dimensional cyberspace that closely resembles the familiar real world seems promising because it supports the metaphor effect. We evaluated the metaphor effect and operating environments for elementary school students, adults, and elderly people. Application of ANOVA (analysis of variance) to the evaluation data clarified the differences between the elderly people and the other groups. Testing of an interface based on this approach showed that elderly people can benefit from the application of the three-dimensional cyberspace metaphor. They felt that the interface, which can be configured to provide low degrees of input freedom, was both easy and interesting to use.
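A one-way ANOVA of the kind mentioned above computes an F-statistic as the ratio of between-group to within-group variance. The sketch below shows that calculation only; the ease-of-use scores for the three age groups are invented for illustration and are not the study's data.

```python
def one_way_anova_F(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# hypothetical ease-of-use ratings for three age groups
students = [4.1, 3.8, 4.3, 4.0]
adults   = [3.9, 4.2, 4.1, 4.0]
elderly  = [3.0, 2.8, 3.2, 3.1]
F = one_way_anova_F([students, adults, elderly])  # large F: groups differ
```

A large F relative to the F-distribution's critical value for (k-1, n-k) degrees of freedom is what "clarified the differences" between groups means statistically.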
We propose moiré reduction methods for integral videography displays. Integral videography is based on the principles of integral photography extended with real-time video processing. Two types of moiré occur in integral videography displays that combine a lens array with a liquid crystal display: color moiré and intensity moiré. To reduce color moiré, we used an optimized color-filter layout in the liquid crystal display. To reduce intensity moiré, we used a defocus method. We also present a design of the viewing area for the integral videography display. To control the viewing area, we changed the lens pitch and the shape of the integral videography elemental image. We implemented a 5-inch integral videography display using the proposed methods and evaluated it.
Electroencephalograms were measured to explore a virtual city solely via human brain activity. Using virtual reality technology, the visual evoked potentials induced by a virtual panorama and by virtual squares were recorded with three electrodes over the visual cortex. Linear discriminant analysis achieved an average recognition rate of about 74.2% in inferring three gaze directions for two subjects. This demonstrates the possibility of interacting with computer graphics according to the estimated gaze directions.
To create a genuine virtual reality space, it is important to develop a high-quality sound reproduction system and to assess its sound quality. However, there has been no objective assessment scale for the sound quality of such a system. In this paper, we investigate an objective assessment scale for a sound reproduction system based on psychological and physiological measurements. The results show that activity of the autonomic nervous system may serve as an objective assessment scale.
Introducing simulated annealing (SA) into the halftoning of a color image creates a halftoning method that is superior to one based on a genetic algorithm (GA). This is because SA does not generate the discontinuities at boundary areas that GA does, and the processing speed of SA is faster than that of GA. Thus, SA is a suitable way of solving this combinatorial optimization problem. When SA was applied to the halftoning of a color image, colors other than white and black were not generated in the gray-scale areas (R=G=B) of the produced image.
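As a minimal sketch of the SA idea (not the paper's algorithm), a gray-scale row can be halftoned by flipping dots and accepting changes by the Metropolis rule. The local 3-pixel cost, the cooling schedule, and all names here are illustrative assumptions; a real halftoner would use a 2-D perceptual error measure.

```python
import math
import random

def sa_halftone(gray, iters=20000, t0=1.0, cool=0.9995):
    """Halftone a row of gray values in [0, 1] by simulated annealing.

    The cost compares each 3-pixel local mean of the binary output with
    the corresponding mean of the input -- a crude stand-in for how the
    eye averages neighboring dots.
    """
    n = len(gray)
    out = [1 if g > 0.5 else 0 for g in gray]   # start from simple thresholding

    def local_cost(x, i):
        lo, hi = max(0, i - 1), min(n, i + 2)
        mean_gray = sum(gray[lo:hi]) / (hi - lo)
        mean_out = sum(x[lo:hi]) / (hi - lo)
        return (mean_gray - mean_out) ** 2

    t = t0
    for _ in range(iters):
        i = random.randrange(n)
        before = local_cost(out, i)
        out[i] ^= 1                              # flip one dot
        after = local_cost(out, i)
        d = after - before
        # Metropolis rule: accept worse states with probability exp(-d/t)
        if d > 0 and random.random() >= math.exp(-d / t):
            out[i] ^= 1                          # reject: flip back
        t *= cool                                # cool the temperature
    return out

row = [i / 15 for i in range(16)]                # a gray ramp
dots = sa_halftone(row)                          # binary dot pattern
```

Because each move changes a single dot, the cost delta is cheap to evaluate, which is part of why SA's processing speed compares favorably with a population-based GA.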
We propose a task model that semi-automatically generates scene-based metadata based on media-analysis technology such as audio/visual indexing and natural-language processing to reduce the costs of generating metadata. Our task model can shorten the task time by reusing both the results of media analysis and existing text information such as program scripts. SceneCabinet, a metadata generation and editing system, can automatically extract scene-based metadata from videos. The system extracts meaningful video slices and textual information such as scene titles, synopses, and keywords using natural-language processing based on the results of speech recognition and video OCR. Moreover, the system can import program scripts and use them to automatically extract keywords. SceneCabinet provides an intuitive user operation interface including a video browser with key images that are automatically detected based on scene changes, on-screen text, camerawork, speech, and music. Experiments showed that SceneCabinet could significantly reduce metadata generation costs.
We describe a new way of obtaining a radiation pattern that has equal minimum levels for a linear array antenna. Using this method, we simultaneously control both the minimum sidelobe level and the level in arbitrary directions of the radiation pattern. The radiation pattern obtained with this method is almost isotropic. The pattern is obtained by controlling only the amplitudes of the currents. The amplitudes of the currents are set using digitally controlled attenuators. Because these attenuators can be changed only in discrete steps of 0.3 dB or less, the resulting quantization error will affect the radiation pattern. The effects of this quantization error on the radiation pattern are therefore verified.
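The attenuator quantization described above can be illustrated as follows. The 0.3 dB step is taken from the abstract; the helper name and the example excitation amplitudes are assumed for illustration.

```python
import math

def quantize_amplitude(amp, step_db=0.3):
    """Round a current amplitude to the nearest attenuator setting.

    The attenuator changes in discrete dB steps, so the designed linear
    amplitude is converted to dB, snapped to the step grid, and then
    converted back to a linear amplitude.
    """
    db = 20 * math.log10(amp)              # linear amplitude -> dB
    q_db = round(db / step_db) * step_db   # snap to the attenuator grid
    return 10 ** (q_db / 20)               # back to linear amplitude

designed = [1.0, 0.83, 0.61, 0.42]         # illustrative excitation amplitudes
realized = [quantize_amplitude(a) for a in designed]
# quantization error in dB is bounded by half a step (0.15 dB here)
errors_db = [abs(20 * math.log10(r / d)) for d, r in zip(designed, realized)]
```

The half-step error bound on each element's amplitude is what propagates into the sidelobe levels, which is why the abstract's verification step is needed.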
We propose a new multimedia viewing environment in which users can effectively share their histories of viewing digital content. A viewing history is a record of which parts of the content the user viewed and/or listened to (and how many times) and which parts the user skipped. Since the parts viewed more frequently should reflect their importance, an effectively utilized viewing history can facilitate browsing of the content by new viewers. To demonstrate the feasibility of our proposal, we constructed a digital media viewing environment in which an automatically updated summary of the viewing histories of other users could be downloaded from and one's own viewing history could be uploaded to a database on a server through the Internet. Testing in this environment demonstrated that the summary of the viewing histories of other users enhances a new viewer's browsing of content.
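One way the server-side summary of viewing histories might work is sketched below. The per-segment count representation and all names are assumptions for illustration, not details from the paper.

```python
def merge_histories(histories, n_segments):
    """Sum per-segment view counts from many users into one shared summary.

    histories  -- one dict per user mapping segment index -> view count
                  (skipped segments are simply absent from the dict)
    n_segments -- number of equal-length segments the content is split into
    """
    summary = [0] * n_segments
    for h in histories:
        for seg, count in h.items():
            summary[seg] += count
    return summary

# two users' histories over content split into 5 segments
users = [{0: 1, 1: 3, 4: 1}, {1: 2, 2: 1}]
summary = merge_histories(users, 5)   # frequently viewed segments stand out
```

A new viewer's browser could then highlight or rank segments by these aggregate counts, on the premise that frequently viewed parts are the important ones.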
We propose a method of colorizing a gray-scale image by transferring color information from reference images to the target image. By adjusting the eigenvalues and eigenvectors in an eigen color space, a pseudo-color image can be produced from a given gray-scale image. We experimentally demonstrate the effectiveness of the method.
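To convey the flavor of reference-based colorization, the sketch below uses a much simpler nearest-luminance lookup rather than the paper's eigen-space adjustment; the luminance weights are the standard ITU-R BT.601 coefficients, and everything else is illustrative.

```python
def colorize(gray, ref):
    """Assign each gray pixel the reference color with the closest luminance.

    gray -- list of gray values in [0, 1]
    ref  -- list of (r, g, b) reference colors with components in [0, 1]
    """
    def luma(c):
        r, g, b = c
        return 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luminance

    lumas = [luma(c) for c in ref]
    out = []
    for v in gray:
        # nearest-luminance lookup (a linear scan keeps the sketch simple)
        i = min(range(len(ref)), key=lambda k: abs(lumas[k] - v))
        out.append(ref[i])
    return out

ref_colors = [(0.1, 0.2, 0.6), (0.9, 0.8, 0.2), (0.4, 0.7, 0.3)]
colored = colorize([0.0, 0.5, 1.0], ref_colors)
```

Dark gray values map to the dark blue, mid grays to the green, and bright values to the yellow, mimicking how the reference image's palette is transferred onto the target's intensity structure.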