A pan-spatial dynamic virtual auditory display (pan-spatial DVAD) has been developed that can present virtual sound sources located at any position. The pan-spatial DVAD is an integrated system comprising a DVAD and an HRTF (Head-Related Transfer Function) calculation server. The DVAD detects the listener's head motion with a head-tracking device and updates the HRTFs in response, enabling presentation of a highly realistic auditory space integrated with the listener's motor sensation. Our DVAD runs on Windows XP, and its total system latency of about 50 ms is small enough in practice for localization experiments and real applications. The HRTF calculation server numerically computes HRTFs for sound sources at any position around the head using the boundary element method. Via TCP/IP, the server receives a source position in a head-relative coordinate system from the DVAD and quickly returns the corresponding HRTFs. Because the server can provide an HRTF for every conceivable source position, the pan-spatial DVAD can present virtual sound sources anywhere, whereas conventional VADs and DVADs built on an HRTF database can present only a limited set of source positions.
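The abstract does not specify the wire format of the DVAD-server exchange; as a minimal sketch, assume a line-based protocol in which the client sends an azimuth/elevation/distance triple and the server replies with comma-separated filter coefficients. The dummy server below simply echoes the request as placeholder "coefficients" in place of a real BEM computation:

```python
import socket
import threading

def request_hrtf(port, azimuth, elevation, distance):
    """Send a head-relative source position, receive HRTF filter taps.
    The 'azimuth elevation distance' line format is an assumption."""
    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    cli.sendall(f"{azimuth} {elevation} {distance}\n".encode())
    reply = cli.makefile().readline()
    cli.close()
    return [float(x) for x in reply.split(",")]

# Stand-in for the BEM-based HRTF calculation server (one request only).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # OS-assigned free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one(srv):
    conn, _ = srv.accept()
    az, el, dist = map(float, conn.makefile().readline().split())
    coeffs = [az, el, dist, 0.0]    # placeholder "HRTF" taps, not real data
    conn.sendall((",".join(map(str, coeffs)) + "\n").encode())
    conn.close()

threading.Thread(target=serve_one, args=(srv,), daemon=True).start()
hrtf = request_hrtf(port, 30.0, 0.0, 1.5)
```

A real deployment would cache recently requested positions on the DVAD side, since head motion produces streams of nearby source positions.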
Mechanizing diving operations is necessary for safer and more efficient underwater construction work in port areas. Teleoperated underwater construction machines are under development, and these machines require a sensing system as well as an operation system. The problem is that conventional TV camera systems are useless underwater because of turbidity. We therefore introduce haptic information as a substitute for visual information to help control the construction machine. We experimentally examined the machine's capability in a real sea area. The results show a leveling accuracy of +17 cm ± 8 cm and a construction efficiency of 25.6 m^2/h. This demonstrates that the proposed system is effective for operation in turbid water where visual information is unavailable, and that its performance is sufficient for actual construction work.
High-accuracy, high-speed skin-color extraction is indispensable for user interfaces in which human gestures serve as input and cues to a system. Conventional color systems for skin-color extraction often lose accuracy when lighting conditions change, and when accuracy is satisfactory, calculation speed tends to suffer. In this paper, we therefore propose a new color system built from two vectors of an individual's skin-color distribution, in which the skin-color area is extracted with a rectangle that optimally fits the skin-color ellipsoid; extraction then reduces to simple range checks along each axis. This method decreases extraction error while drastically reducing the amount of calculation. Experimental results showed that the proposed system extracts skin areas at 149 fps with a correct-extraction rate of 94% or higher. The system can also extract human nail areas.
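The abstract does not give the construction of the rectangle; a plausible reading is that the "two vectors" are the dominant principal axes of a person's skin-color distribution, with runtime extraction reduced to per-axis range checks. A sketch under that assumption (the 2-sigma ranges and synthetic colors are illustrative, not the paper's values):

```python
import numpy as np

def fit_skin_rect(skin_samples, k=2.0):
    """Offline step: the 'two vectors' are taken as the top-2 principal
    axes of the skin-pixel distribution; the rectangle is +/- k std devs
    along each axis (k is an illustrative choice)."""
    mean = skin_samples.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((skin_samples - mean).T))
    axes = vecs[:, -2:]                       # two dominant directions
    proj = (skin_samples - mean) @ axes
    lo = proj.mean(axis=0) - k * proj.std(axis=0)
    hi = proj.mean(axis=0) + k * proj.std(axis=0)
    return mean, axes, lo, hi

def extract_skin(pixels, mean, axes, lo, hi):
    """Runtime step: project and do cheap per-axis range tests."""
    proj = (pixels - mean) @ axes
    return np.all((proj >= lo) & (proj <= hi), axis=1)

# Synthetic demo: a skin-like RGB cluster versus a saturated green pixel.
rng = np.random.default_rng(0)
skin = rng.normal([180.0, 120.0, 100.0], [10.0, 8.0, 8.0], size=(500, 3))
mean, axes, lo, hi = fit_skin_rect(skin)
inside = extract_skin(skin, mean, axes, lo, hi)
non_skin = extract_skin(np.array([[0.0, 255.0, 0.0]]), mean, axes, lo, hi)
```

The runtime cost per pixel is two dot products and four comparisons, which is consistent with the frame rates the abstract reports.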
We propose a method for reducing the out-of-focus blur caused by projector defocus. The point-spread function (PSF) of the blur in the image projected onto the screen is estimated by comparing the screen image captured by a camera with the original image sent to the projector. Using the estimated PSF, the projected image is pre-corrected so that the image appearing on the screen is deblurred. Experimental results show that our method reduces out-of-focus projection blur.
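The paper's PSF-estimation and pre-correction procedures are not detailed in the abstract; the sketch below assumes a Gaussian defocus PSF and uses Wiener-filter pre-correction, so that blurring the pre-corrected image approximates the original better than blurring the original does:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Hypothetical defocus PSF model (the paper estimates the real PSF
    from camera-captured screen images)."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
    return np.fft.ifftshift(g / g.sum())      # center the PSF at (0, 0)

def blur(image, psf):
    """Circular convolution via FFT, standing in for projector defocus."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

def precorrect(image, psf, nsr=1e-3):
    """Wiener pre-correction: amplify frequencies the blur will attenuate,
    regularized by an assumed noise-to-signal ratio nsr."""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # toy test pattern
psf = gaussian_psf(img.shape, sigma=1.5)
err_plain = np.abs(blur(img, psf) - img).mean()               # uncorrected
err_corr = np.abs(blur(precorrect(img, psf), psf) - img).mean()  # pre-corrected
```

The regularization term matters in practice: frequencies the defocus kernel nearly destroys cannot be restored without clipping, since projector intensities are bounded.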
Several studies have examined the transfer effects of playing action video games. Recently, some researchers have proposed auditory virtual reality games using three-dimensional virtual auditory displays, intended for the auditory education of visually impaired people. However, few studies have investigated the transfer effects of playing auditory games. In this paper, we review previous studies on the transfer effects of playing virtual three-dimensional auditory games, and we propose new perspectives and future research directions for auditory virtual reality games.
We believe we can support the research activities of genome scientists by introducing virtual reality and human interface technology into their field to help them analyze large amounts of data. The purposes of this paper are to propose a concept for an interactive visualization environment that meets genome-analysis requirements and to discuss the visualization of genome data analysis. Information visualization in general can be regarded as a form of virtual reality. In this paper, virtual reality methodologies that enable users to manipulate genome data, set parameters, and analyze the results are proposed and evaluated. The proposed system is used in real genome-science analysis.
It is very costly to conduct experiments that test smart environments installed in large-scale public spaces (e.g., airports and railway stations). As a tool for examining the design of such experiments beforehand, we developed a multiagent simulator that integrates a large number of software agents and human participants into a crowd capable of producing complex group behaviors. In testing our emergency guidance system installed in a central railway station, human-like agents generated virtual emergency evacuations, which subjects experienced via virtual reality or augmented reality. The results suggest that our system is superior in trustworthiness, usefulness, and other respects.
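As a toy illustration of the kind of per-step crowd update such a simulator performs (not the authors' actual agent model), each agent can head for an exit at walking speed while being pushed apart from close neighbors:

```python
import numpy as np

def step(pos, exit_pt, dt=0.1, speed=1.3, repulse=0.5, radius=0.6):
    """One illustrative evacuation update: goal attraction plus pairwise
    repulsion between agents closer than `radius` (all constants are
    made-up, not values from the paper)."""
    to_exit = exit_pt - pos
    dist = np.linalg.norm(to_exit, axis=1, keepdims=True)
    vel = speed * to_exit / np.maximum(dist, 1e-9)      # unit goal direction
    diff = pos[:, None, :] - pos[None, :, :]            # pairwise offsets
    d = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(d, np.inf)                         # ignore self-pairs
    push = np.where(d[..., None] < radius,
                    diff / np.maximum(d[..., None], 1e-9), 0.0)
    vel += repulse * push.sum(axis=1)
    return pos + dt * vel

rng = np.random.default_rng(1)
agents = rng.uniform(0.0, 10.0, size=(50, 2))
exit_pt = np.array([10.0, 5.0])
start = np.linalg.norm(agents - exit_pt, axis=1).mean()
for _ in range(20):
    agents = step(agents, exit_pt)
end = np.linalg.norm(agents - exit_pt, axis=1).mean()   # crowd has converged
```

Complex group behaviors in such simulators typically emerge from exactly this kind of local interaction rule applied to many agents at once.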
We propose a system that automatically builds haptic presentation environments into Flash media. The system analyzes structural information from the binary data of Flash content by converting the binary data to dynamic XML using byte-offset calculation, and acquires the object information required to generate haptic information. The movement parameters of objects extracted via ActiveX technology are then updated, and the haptic effect is generated. This paper discusses the framework of this automatic generation and, in particular, the execution performance and the haptic effects generated at the time of object extraction.
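The byte-offset walk over SWF tag records that such a binary-to-XML conversion rests on can be sketched as follows. The header layout and tag encoding are standard SWF format facts; the demo file is constructed by hand rather than taken from real content:

```python
import struct
import zlib

def swf_tags(data):
    """Walk SWF tag records.  Header: 'FWS'/'CWS' signature, version byte,
    uint32 LE file length.  Each tag: uint16 with the tag code in the top
    10 bits and length in the low 6 bits (0x3F means a uint32 long length)."""
    sig = data[:3]
    assert sig in (b"FWS", b"CWS")
    body = data[8:]
    if sig == b"CWS":                         # zlib-compressed body
        body = zlib.decompress(body)
    # Skip the bit-packed stage RECT: 5 bits Nbits, then 4 * Nbits bits.
    nbits = body[0] >> 3
    pos = (5 + 4 * nbits + 7) // 8 + 4        # + frame rate (2) + count (2)
    tags = []
    while pos + 2 <= len(body):
        code_and_len = struct.unpack_from("<H", body, pos)[0]
        code, length = code_and_len >> 6, code_and_len & 0x3F
        pos += 2
        if length == 0x3F:                    # long tag: explicit length
            length = struct.unpack_from("<I", body, pos)[0]
            pos += 4
        tags.append((code, length))
        pos += length
        if code == 0:                         # End tag
            break
    return tags

body = (b"\x00"                               # stage RECT with Nbits = 0
        + b"\x00\x0c" + b"\x01\x00"           # frame rate, frame count
        + b"\x43\x02" + b"\xff\xff\xff"       # SetBackgroundColor (code 9)
        + b"\x40\x00"                         # ShowFrame (code 1)
        + b"\x00\x00")                        # End (code 0)
data = b"FWS" + bytes([6]) + struct.pack("<I", 8 + len(body)) + body
tags = swf_tags(data)
```

Mapping each `(code, offset, length)` triple to an XML element is then a straightforward serialization of this walk.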
Our aim in this research is to create a work-support system that lets users work while remaining aware of a remote user's interactions with corresponding real objects. When a user in a remote place changes the shape of his or her object, the deformation is displayed in real time as a virtual model on the other user's real object. By overlapping the real objects with the virtual models, users can perceive these changes intuitively in three dimensions and share the state of the real objects. Repeating this process realizes work support based on handling real objects. We implemented a prototype system embodying this concept and evaluated it. The results showed that the state of real objects in remote places can indeed be shared by overlapping them with the virtual models.
This paper proposes a novel visualization technique using omnidirectional images and geometric models for a remote robot operation interface. In a normal situation, telepresence with an egocentric view is provided using high-resolution omnidirectional video on a hemispherical screen. Backward images are shown at the top of the screen so that a 360-degree field of view is always available to the user. In addition, a miniaturized 3D model is overlaid on the screen to provide an exocentric view, a common technique in virtual reality. Further 3D information, such as passable areas and terrain roughness, can be overlaid onto the video image in the manner of video see-through augmented reality. Two functions, path-drawing and path-preview, are provided for interacting with the 3D environment through the live video. Path-drawing allows the user to plan the robot's path by simply specifying 3D points along it on the screen. Path-preview provides a realistic image sequence seen from the planned path, using a texture-mapped 3D geometric model in the manner of virtualized reality. Our technique thus allows the user to recognize the remote environment and navigate the robot intuitively. Experimental results show that the proposed visualization technique has advantages over existing methods.
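The abstract does not describe how path-drawing recovers 3D points from screen clicks; one standard approach, sketched here under a pinhole-camera and flat-ground assumption (all parameter values are illustrative), is to cast a ray through the clicked pixel and intersect it with the ground plane:

```python
import numpy as np

def screen_to_ground(u, v, width, height, fov_y_deg, cam_pos, R, ground_z=0.0):
    """Unproject pixel (u, v) to a waypoint on the plane z = ground_z.
    R is the 3x3 world-from-camera rotation; the camera looks along its
    local +z axis.  Assumes the ray actually reaches the plane."""
    f = (height / 2) / np.tan(np.radians(fov_y_deg) / 2)   # focal length (px)
    d_cam = np.array([(u - width / 2) / f, (v - height / 2) / f, 1.0])
    d_world = R @ d_cam
    t = (ground_z - cam_pos[2]) / d_world[2]               # ray parameter
    return cam_pos + t * d_world

# Demo: camera 5 m up, looking straight down; the image center maps to
# the point directly below the camera.
R_down = np.diag([1.0, -1.0, -1.0])
cam = np.array([0.0, 0.0, 5.0])
wp_center = screen_to_ground(320, 240, 640, 480, 60.0, cam, R_down)
wp_right = screen_to_ground(640, 240, 640, 480, 60.0, cam, R_down)
```

With a terrain model instead of a flat plane, the same ray would be intersected with the mesh, which is presumably how passable-area overlays and path-preview reuse the geometry.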
In this paper, we propose a novel tele-presence system for collaborative work. Our tele-presence technique has the advantages of both model-based and image-based approaches: a virtual environment representing a remote place is presented by projection texture mapping of live video images, captured by a wearable camera, onto a 3D geometry of the place acquired in advance. The virtual environment can be rendered from an arbitrary viewpoint, while its texture is dynamically updated according to the camera motion. The observer can thus examine the remote place while cooperating with the remote worker. A feasibility study is also reported, which subjectively confirmed the effectiveness of the remote instruction mechanism.
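Projection texture mapping of this kind assigns each vertex of the pre-acquired geometry a texture coordinate by projecting it through the wearable camera's view-projection matrix; a minimal sketch follows (the matrix here is a toy projection for illustration, not the paper's camera calibration):

```python
import numpy as np

def projective_tex_coords(verts, view_proj):
    """Project 3D vertices through the camera's 4x4 view-projection matrix
    and map normalized device coords [-1, 1] to texture coords [0, 1]."""
    h = np.hstack([verts, np.ones((len(verts), 1))]) @ view_proj.T
    ndc = h[:, :2] / h[:, 3:4]            # perspective divide
    return (ndc + 1) / 2

# Toy projection: 90-degree pinhole looking down -z (w = -z_camera).
P = np.array([[1.0, 0.0,  0.0,  0.0],
              [0.0, 1.0,  0.0,  0.0],
              [0.0, 0.0, -1.0, -0.2],
              [0.0, 0.0, -1.0,  0.0]])
verts = np.array([[0.0, 0.0, -2.0],       # on the optical axis
                  [1.0, 0.0, -1.0]])      # at the right image edge
tc = projective_tex_coords(verts, P)
```

Updating the texture each frame then amounts to recomputing these coordinates with the latest tracked camera pose and sampling the current video frame.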
Understanding human path-selection behavior would enable us to predict or control the flow of pedestrians and would also be useful for designing safe living spaces. However, even in a simple situation in which more than one path leads to a destination, it is difficult to predict which path a person will choose. In the present study, we experimentally examined the decision process of walking-path selection. Experiments were performed in the immersive virtual reality system CAVE, where subjects were required to select either a right or a left path, each formed by three poles. The results indicate that subjects did not select their path on the basis of correctly estimated walking distance, but rather by combining a roughly estimated walking distance with the turning angle.
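The finding suggests a simple additive cost model combining rough distance and turning angle; the sketch below is one way to express such a model, with purely illustrative weights rather than values fitted by the study:

```python
import numpy as np

def path_cost(start, heading, via, goal, w_dist=1.0, w_turn=0.5):
    """Illustrative path score: (rough) walking distance through the via
    point plus a penalty on the initial turning angle.  The weights are
    hypothetical, not estimates from the experiment."""
    leg1, leg2 = via - start, goal - via
    dist = np.linalg.norm(leg1) + np.linalg.norm(leg2)
    cos_t = np.dot(heading, leg1) / (np.linalg.norm(heading)
                                     * np.linalg.norm(leg1))
    turn = np.arccos(np.clip(cos_t, -1.0, 1.0))   # radians
    return w_dist * dist + w_turn * turn

start = np.array([0.0, 0.0])
heading = np.array([0.0, 1.0])        # subject initially faces +y
goal = np.array([0.0, 10.0])
cost_detour = path_cost(start, heading, np.array([-3.0, 3.0]), goal)
cost_direct = path_cost(start, heading, np.array([0.5, 5.0]), goal)
```

Under such a model, the predicted choice is simply the path with the lower combined cost, which can disagree with the shortest-distance prediction when one path requires a sharp initial turn.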
Mixed reality (MR), a technique for superimposing CG images on a real scene, is expected to provide a new way of visualizing and presenting information. We take up disaster-prevention studies and disaster countermeasures as a new application field of MR technology and develop a system for visualizing simulations of disasters that have occurred previously or may occur in the future. First, we developed a system that superimposes CG images on a diorama (a geographical miniature model). Next, we implemented applications for observing flood and fire simulation results and subsurface construction, and confirmed their effectiveness. We also built an application for viewing the history of fire disasters on a PC or in MR space. Furthermore, we suggest how to extend the system to a mobile configuration for outdoor use.
This study proposes a new force display system based on a perceptual illusion. The display exerts a pseudo force by applying air pressure to the operator's wrist. Because it is light, soft, and safe, the system is well suited as a wearable human interface. This paper describes how an operator can feel weight while manipulating virtual objects with the system, and presents experimental results from a haptic discrimination threshold test.
This paper proposes vertex-preserving cutting methods for finite element models, for interactive soft-tissue simulation with visual and haptic feedback. In contrast to related studies, our methods describe both the 3D geometry and the physics of soft-tissue incision without subdividing model elements. Because this approach does not change the number of vertices, it avoids increasing computation time and allows fast, stable updates of the stiffness matrix. Experiments on a general-purpose PC confirmed that valid cutting representation is possible and that the shapes of the finite element elastic models are similar to those of real elastic materials. The algorithms are also about five times faster than the conventional approach, and real-time simulation is achieved.
In this paper, we introduce a virtual pop-up book system using augmented reality technology. The system displays 3D virtual objects on a real book based on estimation of the camera's pose and position. Although many marker-based methods have been proposed, a picture book covered with markers looks unattractive, so our system uses no markers. The virtual pop-up book has four advantages: 3D rendering helps readers understand scenes; characters look lively thanks to motion; the book's author can use a new representation that mixes 2D and 3D rendering; and temporal changes can be expressed through animation.
We propose a simple device named the Lift-mouse, which lifts the palm of the user's hand up and down to give a 2-D picture an impression of solidity. Experimental results showed that an adequate vertical stroke for expressing the height of an object displayed on the 2D screen is shorter than the object's real height. Moreover, the stroke can be reduced further by a visual effect that changes the size of the mouse cursor. These effects confirm that the lifting stroke of the device can be much smaller than the actual size of the object.
This paper describes an input interface for a hands- and eyes-free interaction system, called the vCocktail system, that is effective for wearable computers. The vCocktail system presents efficient spatialized voice menus around the user's head, permitting 99.7% correct listening to multiplexed voices from up to four directions. As the input method for selecting a voice menu, we propose head gestures consisting of two-degree-of-freedom rotations. Based on recorded head trajectories of subjects, we constructed a robust recognition algorithm that achieved 98% correct recognition even under heavy body-movement noise.
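The recognition algorithm itself is not detailed in the abstract; a deliberately simplified stand-in maps a yaw/pitch trajectory to one of four menu directions by its dominant net rotation, with a threshold (the 10-degree value is an assumption) to reject incidental head motion:

```python
import numpy as np

def classify_gesture(yaw, pitch, threshold=10.0):
    """Map a head trajectory (yaw/pitch in degrees over time) to one of
    four menu directions by its dominant net rotation.  A simplification
    of the paper's algorithm, which is not described in the abstract."""
    dy, dp = yaw[-1] - yaw[0], pitch[-1] - pitch[0]
    if max(abs(dy), abs(dp)) < threshold:
        return None                       # no deliberate gesture
    if abs(dy) >= abs(dp):
        return "right" if dy > 0 else "left"
    return "up" if dp > 0 else "down"

t = np.linspace(0.0, 1.0, 30)
gesture = classify_gesture(2 * np.sin(6 * t), -25 * t)   # a downward nod
```

A robust version would work on the full trajectory rather than endpoints, e.g. by matching against recorded subject templates, which is closer to what the data-driven construction in the paper implies.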