Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 7 , Issue 2
Showing articles 1-25 of 25 from the selected issue
  • Type: Cover
    2002 Volume 7 Issue 2 Pages Cover1-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (4415K)
  • Type: Index
    2002 Volume 7 Issue 2 Pages Toc1-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (172K)
  • Type: Index
    2002 Volume 7 Issue 2 Pages Toc2-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (81K)
  • Hideyuki Tamura, Hiroyuki Yamamoto
    Type: Article
    2002 Volume 7 Issue 2 Pages 117-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (178K)
  • Hirokazu Kato, Norio Shiosaki, Keihachiro Tachibana
    Type: Article
    2002 Volume 7 Issue 2 Pages 119-128
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
In this paper, we propose a computer-vision-based registration method for augmented reality based on template matching. Computer-vision tracking methods for augmented reality applications typically use special fiducial markers such as squares or circles. Our new method uses a black square fiducial to obtain the initial tracking condition, but does not use it in the iterative tracking phases. Several natural feature points are extracted from the tracked object by off-line image analysis. While tracking the object, some of these feature points are selected for template matching, and the object pose and position are calculated. Even when the initial tracking square is no longer in view, our method produces robust tracking at real-time frame rates.
    Download PDF (2384K)
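The template-matching step this abstract describes can be illustrated with normalized cross-correlation, a standard similarity measure for matching feature-point patches. This is a minimal sketch under that assumption, not the authors' implementation; the function name and exhaustive search are illustrative only.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustively slide `template` over `image` and return the
    (row, col) of the best normalized-cross-correlation match
    together with its score (1.0 = perfect match)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw].astype(float)
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A real tracker would restrict the search to a window predicted from the previous frame rather than scanning the whole image.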
  • Kiyohide Satoh, Mahoro Anabuki, Hiroyuki Yamamoto, Hideyuki Tamura
    Type: Article
    2002 Volume 7 Issue 2 Pages 129-137
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
In this paper, a registration method for outdoor wearable mixed reality (MR) systems is described. Our approach is based on using a high-precision gyroscope, which can accurately measure the 3-DOF head orientation, but with some drift error. We solved the drift problem with a vision-based drift compensation algorithm, which tracks natural features in the outdoor environment as landmarks from images captured by a camera on an HMD. This paper first describes the details of the vision-based drift compensation method. Then, a calibration method for the orientation sensor is proposed. Finally, using results from an actual wearable MR system, a comparison of registration error with and without vision-based drift compensation demonstrates the feasibility of the proposed method.
    Download PDF (1992K)
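The abstract does not give the fusion rule; a generic way to combine a fast-but-drifting gyroscope with occasional drift-free vision measurements is a complementary filter, sketched here for a single yaw angle. All names and the blending parameter `alpha` are illustrative assumptions, not the authors' algorithm.

```python
def complementary_filter(gyro_rates, vision_yaws, dt, alpha=0.9):
    """Fuse gyro angular rates (integrated each step, accumulating
    drift) with absolute vision-based yaw measurements (drift-free,
    but only available when a landmark is visible; None otherwise).
    Returns the fused yaw estimate per step."""
    yaw = 0.0
    out = []
    for rate, vis in zip(gyro_rates, vision_yaws):
        yaw += rate * dt  # dead-reckoning from the gyro
        if vis is not None:
            # pull the estimate toward the absolute vision measurement
            yaw = alpha * yaw + (1.0 - alpha) * vis
        out.append(yaw)
    return out
```

With a constant gyro bias the uncorrected integral grows without bound, while the corrected estimate stays bounded near the vision reference, which is the qualitative behavior the paper's comparison demonstrates.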
  • Masakatsu Kourogi, Takeshi Kurata, Katsuhiko Sakaue
    Type: Article
    2002 Volume 7 Issue 2 Pages 139-149
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
In this paper, we describe an improved method of personal positioning and orientation using image registration between input video frames and panoramic images captured beforehand. In our previous work, we proposed a method of image registration with an affine transform. However, the affine transform is generally not capable of image registration between a frame and a panorama. We improved the previous method so that it can estimate projective transform parameters without severely increasing computational cost. We also made the method robust to lighting changes by using the weighted sum of absolute differences of both brightness and its gradient between images. Inertial sensors attached to the camera are used to improve robustness and processing throughput and to reduce delay. We confirmed that this improved method could estimate image registration parameters under conditions that hindered the previous method. Its computational cost increased by only 10-20%, and its software implementation was capable of real-time processing.
    Download PDF (2533K)
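The lighting-robust matching cost described here — a weighted sum of absolute differences of brightness and of brightness gradient — can be sketched directly. The weight `w` and function name are illustrative; the authors' exact weighting is not given in the abstract.

```python
import numpy as np

def weighted_sad(a, b, w=0.5):
    """Weighted sum of (1) mean absolute brightness difference and
    (2) mean absolute difference of the horizontal brightness
    gradient between two image patches. The gradient term is
    insensitive to uniform lighting offsets, which is what makes
    the combined cost robust to lighting changes."""
    a = a.astype(float)
    b = b.astype(float)
    sad_brightness = np.abs(a - b).mean()
    ga = np.diff(a, axis=1)  # horizontal gradient of a
    gb = np.diff(b, axis=1)  # horizontal gradient of b
    sad_gradient = np.abs(ga - gb).mean()
    return w * sad_brightness + (1.0 - w) * sad_gradient
```

Note how a patch compared against a uniformly brightened copy of itself gets zero cost from the gradient term but a large cost from the brightness term, so lowering `w` trades brightness sensitivity for lighting robustness.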
  • Takayuki Uchida, Kosuke Sato, Seiji Inokuchi
    Type: Article
    2002 Volume 7 Issue 2 Pages 151-157
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
Several Mixed Reality display methodologies focus on how to represent occlusion between real objects and CG objects. In general, optical see-through displays cannot represent occlusion correctly: a CG object is overlaid translucently on the real one. This paper proposes an optical system based on a Digital Micromirror Device (DMD), a new type of optical see-through display that can represent correct occlusion between real and virtual images. A DMD consists of several hundred thousand micro-mirrors; each micro-mirror corresponds to one pixel of the input image and can switch its reflective direction. A reflective optical system combining a DMD with objective and eyepiece lenses can mix the transparent view of the real scene with another image at arbitrary pixels with an arbitrary mixture rate. Unlike other optical see-through displays built from an LCD shutter and a half-mirror, the reflective, switching optics offer an advantage in optical efficiency.
    Download PDF (1342K)
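The per-pixel mixing the DMD performs can be modeled numerically: each micro-mirror switches between the see-through real path and the virtual-image path, and rapid switching yields a time-averaged, duty-cycle-weighted blend per pixel. This is a sketch of that model only, not the paper's optics; names and the duty-cycle formulation are assumptions.

```python
import numpy as np

def dmd_mix(real_view, virtual_image, duty):
    """Model a DMD-style optical mixer. `duty` is the per-pixel
    fraction of time a micro-mirror directs the virtual image to
    the eye: duty=1 fully occludes the real scene at that pixel
    (correct occlusion), duty=0 passes the real scene through,
    and intermediate values give a translucent blend."""
    return duty * virtual_image + (1.0 - duty) * real_view
```

Setting `duty` to 1 exactly on the virtual object's silhouette is what distinguishes this from a half-mirror display, which can only add light on top of the real scene.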
  • Kiyoshi Kiyokawa, Mark Billinghurst, Daniel Belcher, Arnab Gupta
    Type: Article
    2002 Volume 7 Issue 2 Pages 159-168
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    We present an analysis of communication behavior in face-to-face collaboration using a multi-user Augmented Reality (AR) interface. We have conducted two experiments. In the first, we compare collaboration with AR technology to more traditional unmediated and screen-based collaboration. In the second we compare collaboration with a variety of different AR displays. Several measures are used to analyze communication behavior, and we find that users exhibit many of the same behaviors in a collaborative AR interface as in face-to-face unmediated collaboration. User communication behavior also changes with the type of AR display used. We describe implications for the design of collaborative AR interfaces and directions for future research.
    Download PDF (1966K)
  • Kenji Tojo, Shinsaku Hiura, Seiji Inokuchi
    Type: Article
    2002 Volume 7 Issue 2 Pages 169-176
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
We developed a system for directing real-world operations from a remote site. The system consists of several sets of cameras, projectors, and PCs connected to each other via a network. First, the 3-D shape of the object is measured using a pattern-light-projection method and sent to the remote PC. A supervisor can observe CG of the object from any viewpoint and draw annotation figures on it. The direction message is sent back to the work site and projected onto the object using the projectors. The projected annotations are well aligned geometrically because all cameras and projectors are calibrated with a single reference object. The worker is free of any worn equipment such as an HMD, and multiple projectors avoid occlusion by the worker's body. Direction of alignment tasks for both existing and new objects is also implemented.
    Download PDF (1984K)
  • Itaru Kitahara, Yuichi Ohta
    Type: Article
    2002 Volume 7 Issue 2 Pages 177-184
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    In this paper, we introduce our research for realizing a 3D video display system in a very large-scale space such as a soccer stadium, concert hall, etc. We introduce a method to describe the shape of a 3D object with a set of planes in order to synthesize a novel view of the object effectively. The most effective layout of the sweep-planes can be determined based on the relative location of the observer's viewing position, multiple cameras, and the 3D object. We introduce a method to control the LOD of the 3D shape model by adjusting the number of sweep-planes. The data size of the 3D model and the processing time can be reduced drastically. The effectiveness of our proposed methods was demonstrated by experimental results.
    Download PDF (1816K)
  • Takaaki Endo, Akihiro Katayama, Hideyuki Tamura, Michitaka Hirose
    Type: Article
    2002 Volume 7 Issue 2 Pages 185-192
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
This paper describes a new approach for building a photo-realistic large-scale virtual world. For this purpose, an image-based rendering (IBR) technique seems promising compared to conventional CG techniques. We developed a new image interpolation method that can generate novel images with little distortion. By dividing each base image into layered images and applying the interpolation procedure layer by layer, we can avoid the problem of occlusion. This method makes it possible for a user to walk freely around the positions where the base images were sampled. To record existing large-scale scenes together with position and orientation information, we developed a novel image capturing system. In this paper, experimental results are shown along with the details of the approach and the system.
    Download PDF (1595K)
  • Takuya NOJIMA, Dairoku SEKIGUCHI, Masahiko INAMI, Susumu TACHI
    Type: Article
    2002 Volume 7 Issue 2 Pages 193-199
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
Previous research on augmented reality has focused mainly on augmenting visual or acoustic information. However, humans obtain information not only through vision and hearing but also through haptics. Because haptic sensation is highly intuitive, some researchers have focused on using it in augmented reality systems. In that work, the haptic sensation is generated from static data such as CAD models or CT scans, so those systems have difficulty responding to a changing real environment in real time. In this paper, we propose a new concept for haptic augmented reality: the SmartTool. The SmartTool responds to the real environment by combining real-time sensors with a haptic display; the sensors measure the real environment, and that information is conveyed to the user through haptic sensation. We also describe a prototype system we have developed.
    Download PDF (1242K)
  • Naoaki Hirota, Kenji Ozawa, Yoiti Suzuki
    Type: Article
    2002 Volume 7 Issue 2 Pages 201-209
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
We previously proposed a method to construct an auditory virtual world by means of binaural recording and reproduction. The method is difficult to apply to a real environment because the transfer function from a sound source to a listener must be measured individually in the original sound field. In this paper, we examine, in a real environment, how simplifying this individualization affects the method's performance. The subject's eyes saw a loudspeaker in the real world while the ears heard the sound from the loudspeaker in the virtual world through headphones. The results show that the individualization can be disregarded to some extent while maintaining the accuracy of the virtual world, suggesting that the main difficulty of the method can be substantially simplified.
    Download PDF (1421K)
  • Kazuhiko Kobayashi, Shinobu Ishigame, Hideo Kato
    Type: Article
    2002 Volume 7 Issue 2 Pages 211-218
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
We propose a training system for manual arc welding based on Mixed Reality techniques. The system consists of a face-shield-like HMD and a virtual electrode unit. The former is a video-see-through HMD having almost the same shape and size as an actual face shield. The latter consists of a virtual electrode, a motor for adjusting electrode extension length, an electromagnet, and a reflection-type photo sensor. Moreover, a training assistant function is implemented in which the inclination angle, feed rate of the virtual electrode, and other measures during training are recorded, and the differences between novice and skilled welders are displayed. It is confirmed that the system is effective in training novice welders.
    Download PDF (1615K)
  • Toshikazu OHSHIMA, Tsuyoshi KUROKI, Toshihiro KOBAYASHI, Hiroyuki YAMA ...
    Type: Article
    2002 Volume 7 Issue 2 Pages 219-225
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Visual effects (VFX), which composite computer-generated imagery (CGI) onto live-action scenes in a feature film, usually require a sequence of images that is manually generated in the post-production process. In an alternative approach, mixed reality (MR) merges the real and virtual worlds to achieve real-time interaction between users and MR space. In this sense, MR creates real-time VFX seen from an observer's viewpoint. These two fields, which used to be considered independent, will rapidly affect one another. The VFX techniques, especially photometric registration, are useful in MR systems. On the other hand, MR technology can be utilized in film production. This paper introduces the first MR system that is specially implemented for a rehearsal stage in filmmaking.
    Download PDF (2862K)
  • Takeshi Naemura, Takuya Nitta, Atsushi Mimura, Hiroshi Harashima
    Type: Article
    2002 Volume 7 Issue 2 Pages 227-237
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    We propose the concepts of Virtual Light and Virtual Shadow with the aim of achieving a Mixed Reality Environment focused on shadows. In this proposal, we divide the concept of Virtual Shadow into four categories, and among them, implement four types of interactive applications: (a) real to virtual shadow for rigid objects, (b) real to virtual shadow for non-rigid objects, (c) image-based virtual to virtual shadow, and (d) virtual to real shadow. In these applications, we can see the shadow of a real object projected onto the virtual world and vice versa. These proposed concepts and applications should contribute to the realization of a Mixed Reality Environment that provides a novel sense of visual interaction.
    Download PDF (2907K)
  • Shojiro NAGATA
    Type: Article
    2002 Volume 7 Issue 2 Pages 239-246
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    The "Vergence Fusional Stereoscopic Limits (VFSLs)" of crossed and uncrossed binocular disparities in a stereoscopic display were measured to get the distributions of the VFSLs in a group of young subjects (392 subjects, aged 18-40 years, mean 21.4, S.D. 3.9) using a new autostereoscopic display and measuring system. Cumulative frequency ratios of model log normal distributions fitted to the data showed percentiles as follows: median 3.05, 84% percentile (-S.D.) 2.04 deg arc for uncrossed disparity; median -3.34, 84% percentile -1.84 deg arc for crossed disparity. Factors such as eye position (phoria) that might underlie individual differences, and also the relation with visual stress are discussed. These distribution data will be useful for designing stereoscopic images for comfortable viewing.
    Download PDF (1630K)
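The abstract reports the median and an 84th-percentile point annotated "(-S.D.)", which correspond to exp(mu) and exp(mu - sigma) of a log-normal fitted to the disparity magnitudes. The following sketch shows that fitting step under that reading; the function name is illustrative, and this is not the measuring system described in the paper.

```python
import math

def lognormal_summary(samples):
    """Fit a log-normal distribution to positive samples by taking
    the mean (mu) and standard deviation (sigma) of the log-values,
    then report the median exp(mu) and the value exp(mu - sigma),
    which roughly 84% of subjects exceed (one S.D. below the mean
    on the log scale)."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return math.exp(mu), math.exp(mu - sigma)
```

For design purposes, the exp(mu - sigma) value is the conservative one: keeping on-screen disparities below it keeps the stereoscopic content fusible for most of the population sampled.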
  • Yasushi Ikei, Mariko Yamada
    Type: Article
    2002 Volume 7 Issue 2 Pages 247-255
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
This paper introduces a method of presenting textures for the cutaneous sensation evoked by vibratory pin arrays driven on a sensation intensity map derived from the surface height map of an object. The texture presentation was examined using both the TextureDisplay developed by the authors and the Optacon II. The method defines a basic shape of the histogram transformation, featured by two parameters, which derives a sensation intensity map from a measured height map. With the parameters appropriately adjusted, the presentation evoked a texture sensation close to the real one in intensity, contrast, roughness, and hardness. The Optacon also proved to render textures well from the sensation intensity map once sensation scaling had been performed with respect to the vibration interval control.
    Download PDF (1395K)
  • Kaoru Tanaka
    Type: Article
    2002 Volume 7 Issue 2 Pages 257-266
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
A technique for tracking visual information, different from an eye camera, was devised and implemented using HMD and VR technology. The system restricts the range of the gaze point with a movable mask that follows the information in view, and records active scanning behavior as the subject moves the mask. Using this system, we examined the process of reading a medical image (CTG: cardiotocogram). As a result, scan time and the amount of information acquired varied with the watched location within the restricted visible area of the HMD, and both were small in the peripheral area. This tendency suggests that retinal sensitivity and interest in the object are contributing causes. Based on these results, a measure of interest derived from scan time and area is proposed.
    Download PDF (1579K)
  • Raita Kawasaki, Yoshifumi Kitamura, Fumio Kishino
    Type: Article
    2002 Volume 7 Issue 2 Pages 267-273
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
We propose a method to extract motion individuality from the form of a sports expert and apply it to characters that have different segment lengths. For this purpose, a standard motion is established by removing the subjective motion from an actor who performs the sport skillfully, and surface features (angular velocity and torque of the joint) are extracted from the joint-angle difference, in each animation frame, between the original motion of the sports expert and the standard motion. The surface features and the relative timing in the deep structure are transferred to characters with different segment lengths by applying them to the standard motion of those characters, whose subjective motion is also removed in advance.
    Download PDF (1256K)
  • Tomokazu Sato, Masayuki Kanbara, Naokazu Yokoya, Haruo Takemura
    Type: Article
    2002 Volume 7 Issue 2 Pages 275-282
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
Three-dimensional (3-D) models of outdoor scenes are widely used for object recognition, navigation, mixed reality, and so on. Because such models are often made manually at high cost, automatic 3-D reconstruction has been investigated. In related work, a dense 3-D model is generated by using a stereo method. However, such approaches cannot use several hundred images together for dense depth estimation because it is difficult to accurately calibrate a large number of cameras. In this paper, we propose a dense 3-D reconstruction method that first estimates the extrinsic camera parameters of a hand-held video camera, and then reconstructs a dense 3-D model of a scene. In the first process, extrinsic camera parameters are estimated by automatically tracking a small number of predefined markers of known 3-D position together with natural features. Then, several hundred dense depth maps obtained by multi-baseline stereo are combined in a voxel space to construct a textured 3-D model. Experiments have shown that we can accurately acquire a dense 3-D model of an outdoor scene using several hundred input images captured by a hand-held video camera.
    Download PDF (2349K)
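The voxel-space combination step — merging many per-frame depth maps into one model — can be sketched as simple per-voxel voting: each depth map casts a vote into the depth bin it observes, and voxels supported by enough maps are kept. This toy version assumes pre-aligned, orthographic depth maps; the paper's method works with calibrated perspective cameras, and all names here are illustrative.

```python
import numpy as np

def fuse_depth_maps(depth_maps, n_bins, z_max, min_votes=2):
    """Accumulate per-pixel depth estimates from several aligned
    depth maps into a discretized (H, W, n_bins) vote volume and
    keep voxels observed by at least `min_votes` maps. Voting
    suppresses outlier depths present in only one map."""
    h, w = depth_maps[0].shape
    votes = np.zeros((h, w, n_bins), dtype=int)
    rows, cols = np.indices((h, w))
    for d in depth_maps:
        # quantize each depth value into its voxel bin along z
        idx = np.clip((d / z_max * n_bins).astype(int), 0, n_bins - 1)
        votes[rows, cols, idx] += 1
    return votes >= min_votes
```

With several hundred depth maps, raising `min_votes` trades completeness for robustness against per-frame stereo errors.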
  • Mieko Ohsuga, Hiroshi Sato, Masakatsu Fukushima
    Type: Article
    2002 Volume 7 Issue 2 Pages 283-286
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
The present paper considers the possibility of applying the Bedside Wellness System to the mental activation of elderly people with dementia. The system provides a virtual walk through a natural landscape or rural scenery using virtual reality technology and foot devices installed on the bed. It activates users through multimodal stimuli: realistic moving images, sounds, and fragrance. Nineteen participants aged 64-98 (17 dementia patients and 2 without dementia symptoms) experienced the virtual walk. Activation effects were observed as changes in facial expressions and as facilitation of conversations and gestures.
    Download PDF (1171K)
  • Type: Appendix
    2002 Volume 7 Issue 2 Pages 287-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (113K)
  • Type: Cover
    2002 Volume 7 Issue 2 Pages Cover2-
    Published: June 30, 2002
    Released: February 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (479K)