In this paper, we propose a method for accurate image overlay on head-mounted displays (HMDs) using vision and accelerometers. The proposed method is suitable for video see-through HMDs in augmented reality applications but is not limited to them. Acceleration information is used to predict head motion, both to compensate for end-to-end system delay and to make the vision-based tracking robust. Experimental results showed that the proposed method can keep alignment errors within 6 pixels on average and 11 pixels at maximum, even if the user moves his/her head quickly (up to 10 [m/s^2] and 49 [rad/s^2]).
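The delay compensation described in this abstract can be sketched as a simple constant-acceleration predictor: given the current head position, velocity, and the accelerometer reading, extrapolate the pose over the known system latency. The function name, state representation, and numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predict_head_position(p, v, a, dt):
    """Predict head position after latency dt [s] from current
    position p [m], velocity v [m/s], and measured acceleration
    a [m/s^2] (all 3-vectors), assuming constant acceleration."""
    p, v, a = map(np.asarray, (p, v, a))
    return p + v * dt + 0.5 * a * dt ** 2

# Example: 60 ms end-to-end delay, head accelerating at 10 m/s^2.
pred = predict_head_position([0.0, 0.0, 0.0],
                             [0.5, 0.0, 0.0],
                             [10.0, 0.0, 0.0],
                             0.06)
```

The overlay is then rendered at the predicted pose rather than the last measured one, so the virtual image lands where the head will be when the frame reaches the display.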
We propose a linear algorithm for geometric registration between views of the real scene and the virtual object in the image-based rendering framework. It handles, in a unified framework, both the novel-view synthesis of the virtual object based on three-view matching constraints and the recovery of the camera pose needed for base-image selection. The feasibility of the algorithm is demonstrated using ground-truthed synthetic data and real scene data.
We describe a fiducial marker tracking method for an augmented reality system. Square markers are used for fast, precise registration, and a camera model that includes a distortion function is also employed. We also describe a system calibration method. In order to display virtual objects on a head-mounted display (HMD) screen, the relationships between the HMD, the human eyes, and the head-mounted camera are very important. Camera calibration is also vital for accurate marker tracking. We describe a novel HMD calibration technique that requires little effort from the user and provides very good usability. With good calibration, our tracking method is very accurate and runs at 30 frames/sec on an SGI Octane computer.
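The core of square-marker registration of the kind this abstract describes can be sketched as follows: the four detected corners of a square marker determine a homography, which together with known camera intrinsics yields the marker's rotation and translation. This is a minimal sketch of the standard planar-pose technique, not the paper's implementation, and it omits the distortion model the abstract mentions.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: homography mapping src to dst
    from 4 or more point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Recover rotation R (3x3) and translation t (3,) of a planar
    marker (z = 0 plane) from homography H and intrinsics K."""
    B = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(B[:, 0])   # scale from the first column
    r1, r2 = s * B[:, 0], s * B[:, 1]
    r3 = np.cross(r1, r2)               # complete the rotation basis
    t = s * B[:, 2]
    return np.stack([r1, r2, r3], axis=1), t

# Synthetic check: a unit square 2 m in front of a camera with
# focal length 800 and principal point (320, 240).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
src = [(0.5, 0.5), (-0.5, 0.5), (0.5, -0.5), (-0.5, -0.5)]
dst = [(520, 440), (120, 440), (520, 40), (120, 40)]
R, t = pose_from_homography(homography_dlt(src, dst), K)
```

In practice the recovered columns r1 and r2 are re-orthogonalized (e.g. via SVD) before use, since detected corners are noisy.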
This paper proposes a multiview stereo algorithm that integrates two algorithms, one with and one without an occlusion handling mechanism. The algorithm is useful for improving the sharpness, at occluding boundaries, of depth maps obtained by video-rate stereo machines that lack occlusion handling capability. To realize the integration, we have developed an algorithm for detecting occluded regions and a new correspondence algorithm with an occlusion handling mechanism. Using the evaluation values already computed in the correspondence search that ignores occlusion, occluded regions can be extracted at small additional computational cost. We have also developed a multiview stereo method that uses a sort-based method for occlusion handling. Experimental results demonstrate the effectiveness of the algorithm.
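One common, cheap way to flag occluded regions from values a stereo matcher already produces is a left-right consistency check: a pixel whose left-image disparity disagrees with the disparity found at its match in the right image is marked occluded. This sketch illustrates that general idea; the paper's own detector reuses its correspondence-search evaluation values, which is a different (though related) criterion.

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tol=1):
    """Return a boolean mask, True where left/right disparities
    disagree by more than tol (likely occlusion or mismatch)."""
    h, w = disp_left.shape
    xs = np.arange(w)
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        xr = xs - disp_left[y]            # matched column in right image
        valid = (xr >= 0) & (xr < w)
        d_back = np.where(valid, disp_right[y, np.clip(xr, 0, w - 1)], -1)
        mask[y] = ~valid | (np.abs(disp_left[y] - d_back) > tol)
    return mask

# Tiny example: constant disparity 2; the two leftmost pixels have
# no valid match in the right image and are flagged.
dl = np.array([[2, 2, 2, 2, 2]])
dr = np.array([[2, 2, 2, 2, 2]])
m = occlusion_mask(dl, dr)
```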
This paper presents an overview of our efforts on mixed reality. These efforts span two aspects: how to create models of virtual objects, and how to integrate such virtual objects with real scenes. For model creation, we have developed two methods, the model-based rendering method and the eigen-texture method, both of which automatically create models by observing the real objects. For the integration of a virtual object with a real scene, we have developed a method that renders virtual objects based on real illumination distribution. We have successfully tested the proposed methods by using real images to demonstrate their effectiveness.
This paper describes a method to compress ray-space data used in real-time rendering for augmented virtuality systems. In the method, a 3D ray-space is adopted instead of the full 6D ray-space to reduce the initial input data. To achieve real-time decoding, multi-viewpoint images and a correspondence table are used as the 3D ray-space data, and these images are compressed using hierarchical DCT coding. Implementation results in the CyberMirage system, which can build a photo-realistic virtual environment, are also shown.
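The building block of DCT-based image coding of the kind this abstract mentions is the 8x8 block transform followed by quantization; the hierarchical scheme in the paper is more elaborate, so the following is only a rough sketch of the base step under standard assumptions.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix for 8-point transforms.
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def encode_block(block, q=16):
    """Forward 2-D DCT of an 8x8 block, then uniform quantization."""
    return np.round(C @ block @ C.T / q).astype(int)

def decode_block(coeffs, q=16):
    """Dequantize and apply the inverse 2-D DCT."""
    return C.T @ (coeffs * q) @ C

# Smooth blocks survive coarse quantization well: a vertical ramp
# round-trips with small error because its energy sits in few coefficients.
block = np.outer(np.arange(8.0), np.ones(8)) * 10
rec = decode_block(encode_block(block))
```

Fast decoding comes from the sparsity of the quantized coefficients: most entries are zero, so the inverse transform touches little data per block.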
Recently, a new technique called image-based rendering has been investigated for synthesizing photo-realistic views from images. To date, most work on it has concentrated on static scenes. To cope with 3-D scenes in motion, we must establish methods for processing multiple video sequences in real time and for constructing an accurate camera array system. In this paper, the authors realize a real-time system for rendering arbitrary views of 3-D scenes in motion. The system consists of a video-based rendering engine and an array of sixteen cameras that supports software-based adjustment. Experimental results show that appropriate views are synthesized in real time according to the observer's viewpoint.
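One step any such observer-driven system needs is selecting which of the sixteen cameras best serves the current viewpoint. The toy sketch below picks the camera whose optical center is nearest the observer; both the names and the selection rule are illustrative assumptions, not the paper's actual algorithm, which blends views rather than picking one.

```python
import numpy as np

def nearest_camera(viewpoint, camera_centers):
    """Index of the camera center closest to the observer's viewpoint.

    viewpoint      -- 3-vector, observer position
    camera_centers -- (n, 3) array of camera optical centers
    """
    d = np.linalg.norm(camera_centers - viewpoint, axis=1)
    return int(np.argmin(d))

# Three cameras on a line; an observer at x = 1.1 selects camera 1.
centers = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
idx = nearest_camera(np.array([1.1, 0.0, 0.0]), centers)
```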
This paper proposes a system to produce 3D computer animation based on human performance captured from video image sequences. The system uses a 3D model of the human body both for motion capture and for generating CG images. A human action is observed by video cameras placed around the actor. By fitting the 3D human model to the human images, the action is measured as a series of 3D poses of the human body. This method does not require attaching any equipment or markers to the human body. We have built a database of various kinds of human actions. An action required in a story can be generated by warping and combining actions from the database. Using this animation system, we produced an animation video based on a fairy tale from the Brothers Grimm collection.
Mixed Reality (MR) merges the real world with a virtual world created by a computer. Optical superimposing has been a promising MR display technology because it does not degrade the quality of the user's view. Unfortunately, optical see-through displays cannot represent correct occlusion: a CG object cannot occlude a real object, and instead overlaps it translucently. In this paper, an architecture for an optical see-through MR display that solves this problem is proposed. It can represent correct occlusion among real and virtual objects. Two new prerequisite elements, a real-time range finder and an active pattern light source using a video projector, are incorporated into the display. In a darkroom, the dynamic active pattern light projection illuminates only the unoccluded portions of the real object, according to the interference between the 3D model of the virtual object and that of the real one acquired by the range finder. The occluded portions of the real object therefore remain invisible because they receive no illumination, so the CG object approximately occludes the real object. A prototype display based on this architecture shows interesting interactions between the virtual and real worlds, including manipulation by the user's hands.
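The per-pixel illumination decision the display makes can be sketched as a depth comparison: a projector pixel lights the real object only where the virtual object's rendered depth does not lie in front of the real surface measured by the range finder. The function and array names below are assumptions for illustration.

```python
import numpy as np

def illumination_mask(real_depth, virtual_depth):
    """True where the real surface should be lit by the projector.

    real_depth    -- per-pixel depth of the real scene from the
                     range finder, seen from the projector
    virtual_depth -- depth of the virtual object rendered from the
                     projector viewpoint (np.inf where no virtual
                     geometry covers the pixel)
    """
    # Virtual geometry in front of the real surface -> leave dark,
    # so the unlit real region appears occluded by the CG object.
    return real_depth <= virtual_depth

# 2x2 example: the top-left pixel is covered by virtual geometry
# at depth 0.5 in front of a real surface at depth 1.0.
real = np.array([[1.0, 1.0], [1.0, 1.0]])
virt = np.array([[0.5, np.inf], [2.0, np.inf]])
mask = illumination_mask(real, virt)
```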
The projection-based virtual reality (VR) system, which gives us the illusion of immersion, has opened a new field of technology. In such a system, we can enter a virtual world projected onto 3-D screens. Meanwhile, many researchers have started to investigate how to mix the physical world and the virtual world, a field of technology called "Mixed Reality." In this paper, the authors focus on realizing mixed reality in a projection-based VR system. For this purpose, a mirror is utilized as a handy 3-D display for virtual objects. The virtual objects are displayed on the screens and reflected in the mirror. In the proposed system, we can simultaneously see the virtual world projected onto the screens, a physical object in front of the screens, and the virtual object in the mirror. Experimental results show that the proposed method is potentially applicable to next-generation 3-D interfaces and new kinds of artistic effects.
This paper proposes a haptization system that allows the blind to touch untouchable objects. A stereo vision system is adopted to obtain the range data that represent the untouchable objects. A tension-based haptic display of the point-contact type is used to present the shape to the blind. The method that transforms the range data into a smooth surface is described. The communication between the stereo vision system and the haptic display is discussed so as to realize real-time touch. An experimental system has been developed to evaluate the proposed approach.
Machine tool operations such as small-diameter deep-hole drilling and boring are well known as skilled tasks because of the low stiffness of the cutting tools. Artificial reality techniques are useful for developing simulators for skill training in machine tool operations. In the present paper, the simulation of a boring operation with a lathe has been investigated. Since the size, shape, and tactile feel of the lathe's tool feed handle are important in skill training, an actual handle should be used in the simulation. In addition, the operator's head movement during the operation should be taken into consideration. The authors have therefore adopted the mixed reality technique to realize the simulator. Simplified dynamical and geometrical models have been introduced, and their parameters have been identified by an optimization method from experimental data of actual lathe operation.
A mixed reality amusement system employing a foot interface, "Fantastic Phantom Slipper", has been developed. The system includes slipper-like wearable devices, optical motion capture, and a hemispherical floor screen with a video projector. Phantom sensation is elicited by vibrators in the soles of the slippers to transmit information from the floor. The most important concept of this work is to experience cyber worlds intuitively through human actions and sensations. With this system, players can walk into and feel cyber worlds with their own feet, as they do in the real world. In this paper, the concept of the work, the system configuration, the amusement content, and players' reactions are described.
This paper introduces a collaborative shooting game, "RV-Border Guards," which uses Mixed Reality (MR) technologies. The system is designed to emphasize MR-specific features for entertainment. Three players wearing HMDs cooperatively battle virtual invaders flying around them in the MR space. Each player is armed with virtual gear such as a helmet and a gun, and can intuitively interact with the MR space using simple gestures. The overall realism of the MR space is carefully tuned. This project aims to achieve a novel form of multi-player entertainment that could not be realized without MR technologies.
An indoor distribution substation simulator for the training of substation maintenance personnel was developed. In this development, the major consideration was what the user experiences in the virtual space generated by the simulator, and how. An even more important consideration was how long the developer can let the user experience the virtual space without discomfort or sickness. Many factors determine the appropriate experience time and the contents of the simulator. The experience in the virtual space has various physiological and psychological influences on the users, and these effects depend heavily on the environment of each simulator. The physiological and psychological influences on users of the simulator were studied experimentally. A 40-minute experience session and sessions with system delay were conducted with the simulator. The influences on the sense of sight and the sense of balance, subjective symptoms, and the influence of the system's delay on the users were measured and analyzed.
CompleXcope is a CAVE VR system installed at the Theory and Computer Simulation Center, National Institute for Fusion Science. The purpose of the CompleXcope system is the scientific visualization and interactive three-dimensional data analysis of supercomputer simulations of three-dimensional complex phenomena, including nuclear fusion plasmas, fluid dynamics, and the molecular dynamics of polymers. Several elemental visualization modules for three-dimensional data analysis of scalar and vector fields in a virtual space have been developed using OpenGL and the CAVE library. Combining these modules, CompleXcope application programs are developed and actively used in physics research.
This paper describes the HapticWorkBench, an immersive display that can support haptic feedback devices. Haptic devices come in various shapes and with various functions. Most immersive displays are rear-projection displays and cannot be used with some types of haptic devices, because those devices hide the images projected on the screen. We developed an immersive display called the HapticWorkBench that can freely use both front projection and rear projection; therefore, it can be used with various types of haptic devices. The calibration method and the influence of tracking error are also discussed.
A problem in implementing an interface that uses only audio media is the presentation scheme: how to convey the position and the contents of the object that represents a piece of information. Our proposed interface, TAS (Total Audio interface System), provides input by voice and presentation by three-dimensional audio, so objects can be selected using audio media alone. In this paper, we propose a scheme to present several audio objects through a menu. The system helps the user recall the contents of an object by also presenting other sounds that were heard at the place where the original sound was recorded. The TAS interface with this function is evaluated on an example task of selecting sounds often heard in everyday life. The results show that the selection system is easy to use even when several objects are arranged in a hierarchical structure.