Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 10, Issue 3
Showing articles 1-27 of 27 from the selected issue
  • Type: Cover
    2005 Volume 10 Issue 3 Pages Cover1-
    Published: September 30, 2005
    Released: February 01, 2017
  • Type: Index
    2005 Volume 10 Issue 3 Pages Toc1-
    Published: September 30, 2005
    Released: February 01, 2017
  • Type: Bibliography
    2005 Volume 10 Issue 3 Pages Misc1-
    Published: September 30, 2005
    Released: February 01, 2017
  • Naokazu Yokoya, Haruo Takemura
    Type: Article
    2005 Volume 10 Issue 3 Pages 271-
    Published: September 30, 2005
    Released: February 01, 2017
  • Ryosuke Ichikari, Asako Kimura, Fumihisa Shibata, Hideyuki Tamura
    Type: Article
    2005 Volume 10 Issue 3 Pages 273-280
    Published: September 30, 2005
    Released: February 01, 2017
    Compared with visual effects in movies or full-CG animations, mixed-reality (MR) content production, a technique that overlays CG images onto real-world scenes, demands stricter real-time rendering and consistency with the real environment. Natural phenomena are sometimes too complicated to describe in an MR environment by physical modeling. We focus on "falling natural objects" such as cherry blossoms, autumn leaves, and snow as stage effects. We propose an effective method that approximates the motion of fluttering natural objects by a normal random walk. A simulation confirmed that the proposed method also adapts to changes in wind and to differing CPU loads when rendering objects of various polygon counts.
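The abstract does not give the update rule, but a normal-random-walk approximation of a fluttering fall can be sketched as follows; the function name and all parameter values are illustrative, not taken from the paper:

```python
import random

def flutter_step(x, y, fall_speed=0.05, sigma=0.02, wind=0.0):
    """One time step of a fluttering fall: constant descent plus a
    normally distributed horizontal perturbation (a normal random walk),
    offset by a wind term."""
    x += random.gauss(0.0, sigma) + wind
    y -= fall_speed
    return x, y

# Simulate one petal falling from height 1.0 until it reaches the ground.
random.seed(42)
x, y = 0.0, 1.0
while y > 0.0:
    x, y = flutter_step(x, y, wind=0.005)
```

Because the horizontal term is zero-mean Gaussian noise, larger sigma gives a more chaotic flutter, and the wind offset simply biases the walk, which matches the abstract's claim that the method adapts to wind changes.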
  • Kazuhiko Takahasi, Takao Shibasaki, Yuichiro Akatsuka
    Type: Article
    2005 Volume 10 Issue 3 Pages 281-284
    Published: September 30, 2005
    Released: February 01, 2017
    This paper describes a new AR system that is easy to carry and works stand-alone. We developed an OST (optical see-through) viewer that can estimate position and posture while displaying 3D CG at 10 fps. We also enumerate the calibration problems inherent in this OST approach. Our two-step calibration makes it possible to use the viewer after re-installation without a complicated calibration procedure.
  • Motoko Oe, Tomokazu Sato, Naokazu Yokoya
    Type: Article
    2005 Volume 10 Issue 3 Pages 285-294
    Published: September 30, 2005
    Released: February 01, 2017
    In the field of wearable augmented reality, the absolute position and posture of the user's camera are necessary to overlay CG on real images. Two kinds of approaches have been investigated in this field: sensor-based and vision-based methods. However, both kinds of conventional methods are difficult to use in large and changeable environments: the usable environment is limited by the choice of sensors in sensor-based methods, and in vision-based methods many artificial markers must be placed throughout a large environment to prevent accumulated errors. This paper proposes a novel vision-based camera position and posture estimation method that requires neither sensors nor artificial markers. In the proposed method, a feature landmark database is constructed in advance from a large number of natural feature points in omni-directional image sequences. Then, by detecting feature landmarks in images captured by the wearable camera, the camera position and posture are estimated in every frame without accumulated errors.
  • Yusuke Nakazato, Masayuki Kanbara, Naokazu Yokoya
    Type: Article
    2005 Volume 10 Issue 3 Pages 295-304
    Published: September 30, 2005
    Released: February 01, 2017
    To realize an augmented reality (AR) system on a wearable computer, the exact position and orientation of the user are required. We propose a localization method based on an IR camera and invisible visual markers consisting of translucent retro-reflectors. To stably extract marker regions from the captured images, the camera captures the reflection of IR LEDs that are flashed on and off continuously. In experiments, we first give a quantitative evaluation in computer simulation to decide the alignment of markers in real environments. We then carry out localization experiments in real environments in which markers are installed according to the alignment decided by the simulations. Finally, we develop a wearable augmented reality system using the proposed localization method.
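The flashing-LED scheme suggests extracting marker regions by differencing an LED-on frame against an LED-off frame; the following is a minimal sketch of that idea (the function name and threshold value are assumptions, not the authors' code):

```python
def extract_marker_regions(frame_on, frame_off, threshold=50):
    """Return a binary mask of candidate marker pixels by differencing
    an IR-LED-on frame against an IR-LED-off frame (2D lists of ints).
    Retro-reflective markers are bright only while the LEDs are lit,
    so the difference suppresses static IR sources such as lamps."""
    mask = []
    for row_on, row_off in zip(frame_on, frame_off):
        mask.append([1 if (p_on - p_off) > threshold else 0
                     for p_on, p_off in zip(row_on, row_off)])
    return mask

# A 3x3 toy example: the centre pixel is a retro-reflector,
# the top-right pixel is a steady IR light source.
on_img  = [[10, 10, 200],
           [10, 255, 10],
           [10, 10, 10]]
off_img = [[10, 10, 200],
           [10, 20, 10],
           [10, 10, 10]]
print(extract_marker_regions(on_img, off_img))
# → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

Only the retro-reflector survives the differencing, which is the stability property the abstract claims for the flashing markers.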
  • Ryuhei Tenmoku, Masayuki Kanbara, Naokazu Yokoya
    Type: Article
    2005 Volume 10 Issue 3 Pages 305-311
    Published: September 30, 2005
    Released: February 01, 2017
    By realizing augmented reality on wearable computers, it becomes possible to overlay annotations on the real world according to the user's current position and orientation. However, it is difficult for the user to intuitively understand the links between annotations and real objects when the scene is complicated or many annotations are overlaid at the same time. This paper describes a view management method that emphasizes the real objects the user gazes at, together with their annotations, using 3D models of the scene. The proposed method effectively highlights the gazed objects. In addition, when a gazed object is occluded by other real objects, its hidden portion is filled in on the overlaid image using an image generated from the 3D models.
  • Masayuki Takemura, Yuichi Ohta
    Type: Article
    2005 Volume 10 Issue 3 Pages 313-321
    Published: September 30, 2005
    Released: February 01, 2017
    We propose a new scheme to restore gaze awareness among multiple users in a shared mixed-reality space. Gaze awareness in a mixed-reality space is lost as a side effect of wearing head-mounted displays. Our system synthesizes facial images in real time, with arbitrary poses and eye expressions, from several photographs and a 3D model of the user. The facial image is overlaid to diminish the HMD in the partner's view and thereby recover gaze awareness. Subjective evaluations of the proposed scheme were conducted by examining the perception of gaze direction between two users of the developed system.
  • Fumihisa Shibata, Asako Kimura, Takashi Hashimoto, Koki Furuno, Takash ...
    Type: Article
    2005 Volume 10 Issue 3 Pages 323-331
    Published: September 30, 2005
    Released: February 01, 2017
    We aim to build a general framework for mixed reality (MR) systems in which MR functions can be installed on various kinds of mobile computers, such as cellular phones, personal digital assistants (PDAs), and wearable computers. Our goal is not only to develop an application-independent mobile software platform but also to establish a general framework that can absorb the differences in type and performance among mobile computers. We developed a system architecture in which MR functions are distributed between a server and clients, and then designed a content description language called SKiT-XML, which describes the MR content exchanged between the server and the clients. We implemented a cabling support system for an OA floor as a sample application and verified the effectiveness of the framework.
  • Nobuchika Sakata, Takeshi Kurata, Masakatsu Kourogi, Hideaki Kuzuoka, ...
    Type: Article
    2005 Volume 10 Issue 3 Pages 333-342
    Published: September 30, 2005
    Released: February 01, 2017
    The Wearable Active Camera/Laser (WACL) is a new collaboration tool that allows remote experts not only to set their viewpoints into the workplace of a fieldworker wearing the WACL, independently of the worker's motion, but also to point at real objects directly with the laser spot. In this paper, we introduce the WACL and its functions, such as camera/laser head stabilization, and describe a user study that examines the advantages and limitations of the WACL interface in remote collaboration by comparing it with a headset interface based on a head-mounted display and a head-mounted camera. Results show that the WACL is more comfortable to wear, is more eye-friendly, and causes less fatigue to the wearer, although there is no significant difference in task completion time. In the communication analysis, experts talked more to workers wearing the WACL when detailed instructions were needed, and more to workers wearing the headset when view changes were required. Finally, we discuss improvements to the WACL that redress these communication asymmetries by enhancing its visual assistance.
  • Kengo Uratani, Takashi Machida, Kiyoshi Kiyokawa, Haruo Takemura
    Type: Article
    2005 Volume 10 Issue 3 Pages 343-352
    Published: September 30, 2005
    Released: February 01, 2017
    It is often difficult to perceive the depth of virtual objects in an augmented reality (AR) environment due to poor depth cues and visual congestion. Existing visualization techniques that address the depth-ambiguity problem are intended primarily for ordinary objects having 3D geometry. In this paper, we discuss depth visualization techniques for virtual annotations, which are used to display key information in AR applications such as navigation and maintenance systems. We begin by describing the depth-ambiguity problem with regard to virtual annotations. Next, a number of possible solutions are discussed by introducing a metaphor of monocular vision. Finally, we describe three experiments that reveal the effectiveness and characteristics of the three visualization techniques implemented in the prototype system.
  • Keisuke Tateno, Masayuki Takemura, Yuichi Ohta
    Type: Article
    2005 Volume 10 Issue 3 Pages 353-361
    Published: September 30, 2005
    Released: February 01, 2017
    The concept of "enhanced eyes" for restoring gaze awareness in a collaborative mixed-reality space is proposed. Three "enhanced eyes" schemes are described: controlling the highlight in the eyes to aid awareness of eye contact, deforming the eyelids to enhance eye motion, and adjusting the rotation angle of the eyeballs to improve the perception of gaze direction. The effectiveness of the schemes has been confirmed by subjective evaluations. The "enhanced eyes" emulate the natural appearance of the face and are effective not only in indicating gaze direction but also in creating the feeling of gaze.
  • Satoshi Saga, Naoki Kawakami, Susumu Tachi
    Type: Article
    2005 Volume 10 Issue 3 Pages 363-369
    Published: September 30, 2005
    Released: February 01, 2017
    We propose a haptic teaching system that uses a haptic device to teach handwork skills. Specifically, we chose tasks performed with pencil-like devices and examined how to teach an expert's handwork skills. We propose a new haptic teaching method in which the haptic device produces a force opposite in direction to the force to be taught. The operator tries to cancel this force, and consequently the necessary force is "proactively" generated. Our hypothesis is that this "proactiveness" is essential for haptic teaching. We built a prototype system and compared our method with existing teaching methods.
  • Kenichi Hayashi, Hirokazu Kato, Shogo Nishida
    Type: Article
    2005 Volume 10 Issue 3 Pages 371-380
    Published: September 30, 2005
    Released: February 01, 2017
    In augmented reality, the occlusion between virtual and real objects has to be handled correctly so that users see a natural scene. To address this problem, the depth of the real world from the user's viewpoint is employed. However, if the system is used in an environment containing moving or deformable objects, the depth has to be measured in real time. In this paper, we propose a real-time stereo matching method that uses a contour-based approach to acquire an accurate depth for the boundaries of real objects.
  • Yuichi Bannai, Kazuhiro Miyasa, Yuji Suzuki, Hiroshi Shigeno, Kenichi ...
    Type: Article
    2005 Volume 10 Issue 3 Pages 381-389
    Published: September 30, 2005
    Released: February 01, 2017
    We propose a remote collaboration system in which the worker and the expert each wear a video see-through HMD. In this system, the expert can interact with mixed-reality objects in the worker's space while sharing the stereo images from the worker's viewpoint. While the system's WYSIWIS property is a merit, it has the drawback that the expert's field of view is completely controlled by the worker. We conducted an experimental evaluation to investigate the effects of these features and conclude that the expert can assist the worker's task using this system.
  • Kiyohide Satoh, Shinji Uchiyama, Hiroyuki Yamamoto
    Type: Article
    2005 Volume 10 Issue 3 Pages 391-400
    Published: September 30, 2005
    Released: February 01, 2017
    This paper describes a new robust registration framework, named "UG+B", for augmented and mixed reality (AR/MR) applications. Our methods measure the pose of a head-mounted display (HMD) using not only an image captured by a user's-view camera mounted on the HMD, but also images captured by bird's-eye-view cameras, each of which observes the HMD from a third-person viewpoint, as well as an orientation measured by a sensor mounted on the HMD. In our methods, the markers placed in the scene and those on the HMD are tracked in the user's-view and bird's-eye-view images, respectively. The pose of the HMD is then estimated so as to minimize the sum of the re-projection errors over all markers. Furthermore, the inclination angles measured by the orientation sensor are used as known values to constrain the pose; the remaining parameters, i.e., the position and azimuth angle, are derived from the image information. In addition to theoretical discussions, this paper demonstrates the effectiveness of our methods through experiments that compare them with conventional vision-based registration methods.
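The quantity being minimized, a sum of squared re-projection errors with the inclination fixed by the orientation sensor, can be sketched as follows; the simple pinhole model and azimuth-plus-translation parameterization are illustrative assumptions, not the paper's exact formulation:

```python
import math

def project(point, pose, focal=1.0):
    """Pinhole projection of a 3D point given a reduced pose: an azimuth
    rotation about the vertical axis plus a translation. The inclination
    (pitch/roll) is assumed already fixed by the orientation sensor, so
    only azimuth and position remain free, as in the abstract."""
    az, tx, ty, tz = pose
    x, y, z = point
    # Rotate about the vertical (y) axis by the azimuth, then translate.
    xc = math.cos(az) * x + math.sin(az) * z + tx
    yc = y + ty
    zc = -math.sin(az) * x + math.cos(az) * z + tz
    return (focal * xc / zc, focal * yc / zc)

def reprojection_error(pose, markers_3d, markers_2d):
    """Sum of squared distances between observed marker positions and
    their projections, i.e. the cost the registration minimizes."""
    err = 0.0
    for p3, p2 in zip(markers_3d, markers_2d):
        u, v = project(p3, pose)
        err += (u - p2[0]) ** 2 + (v - p2[1]) ** 2
    return err

# With the true pose, the error vanishes; a perturbed pose gives a
# positive cost, which an optimizer would drive back toward zero.
markers_3d = [(0.0, 0.0, 5.0), (1.0, 0.5, 6.0), (-1.0, -0.5, 4.0)]
true_pose = (0.1, 0.2, -0.1, 0.0)
observed = [project(p, true_pose) for p in markers_3d]
print(reprojection_error(true_pose, markers_3d, observed))  # → 0.0
```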
  • Daisuke Kotake, Shinji Uchiyama, Hiroyuki Yamamoto
    Type: Article
    2005 Volume 10 Issue 3 Pages 401-410
    Published: September 30, 2005
    Released: February 01, 2017
    This paper describes a calibration method for the markers used for registration in MR applications. In vision-based registration methods that utilize multiple markers, the geometric information of the marker arrangement, such as marker positions and orientations, must be known in advance. In this paper, we propose a hybrid method combining the conventional bundle adjustment method with constraints on the marker arrangement that are obtained a priori. After presenting the basic framework that realizes this concept, we describe specific solutions for some typical constraints. Experiments are then shown to prove the effectiveness of our method.
  • Yoichi Aoki, Sawako Hanatani, Chinatsu Horii, Kosuke Sato
    Type: Article
    2005 Volume 10 Issue 3 Pages 411-419
    Published: September 30, 2005
    Released: February 01, 2017
    This paper presents a new methodology for virtually reproducing color properties using an optical see-through display with a special half-mirror of variable transmittance. The method adjusts the mixture ratio between the real and virtual scenes by estimating what color should be displayed and what the most effective transmittance of the variable filter is. Controlling the transmittance in this way expands the range of color reproduction attainable in mixed-reality worlds. In this paper, we explain the mathematical model of the color restoration and present experimental results on changing the color appearance of a real degraded painting, a Japanese Ukiyo-e, for virtual heritage.
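A simple additive mixing model consistent with the abstract (an assumption here, not the authors' exact formulation) is that the observed color equals the transmittance times the real scene plus the displayed color; the display must then emit the remainder, and the transmittance is chosen so that the required emission stays physically realizable:

```python
def displayed_color(target, real, t):
    """Assumed optical see-through model: observed = t * real + emitted,
    where t is the half-mirror transmittance. To make the observed color
    equal the target, the display must emit the remainder per channel."""
    return [c - t * r for c, r in zip(target, real)]

def best_transmittance(target, real, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the highest candidate transmittance for which every channel
    the display must emit stays within its physical range [0, 1]."""
    best = 0.0
    for t in candidates:
        emitted = displayed_color(target, real, t)
        if all(0.0 <= c <= 1.0 for c in emitted) and t > best:
            best = t
    return best

# Reproducing a target color over a real pigment: a high transmittance
# fails because some channels would need negative emission.
target = [0.8, 0.6, 0.4]
real = [0.4, 0.5, 0.6]
print(best_transmittance(target, real))  # → 0.5
```

The display cannot emit negative light, which is why lowering the transmittance (darkening the real scene) widens the reproducible color range, as the abstract argues.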
  • Kenji Funahashi, Sidney Fels
    Type: Article
    2005 Volume 10 Issue 3 Pages 421-428
    Published: September 30, 2005
    Released: February 01, 2017
    In this paper, we describe a method for expressing water waves and splashes for virtual swimming in the artwork Swimming Across the Pacific. Our water simulation is based on a method in which the water surface is modeled as a thin film. For plane waves on the sea surface, we use recurrence relations to solve the partial differential equation for 2D waves, as is standard practice; the sea surface is modeled as a mesh. We approximate the volume of the swimmer with bounding boxes to improve performance, and determine when they intersect the water grid as they move; waves are then generated and propagated. If the wave height exceeds a pre-defined threshold and/or the swimming avatar's motion interferes with the waves, particles are spawned in the air to simulate splashing.
  • Takeshi Oishi, Tomohito Masuda, Ryo Kurazume, Katsushi Ikeuchi
    Type: Article
    2005 Volume 10 Issue 3 Pages 429-436
    Published: September 30, 2005
    Released: February 01, 2017
    We digitally restored the original Great Buddha and main hall of Todaiji temple. Todaiji temple has been destroyed by natural and man-made disasters and rebuilt a number of times; as a result, the shapes of the current Great Buddha and its main hall differ slightly from their original states. We reconstructed 3D models of the original Great Buddha and its main hall. The 3D model of the original Great Buddha was reconstructed by morphing the 3D model of the current Great Buddha, which was obtained by laser scanning and modeling-from-reality techniques. The 3D model of the original main hall was reconstructed by assembling partial 3D models of the main hall of Toshodaiji temple. We combined these models and visualized the original state of Todaiji temple using computer graphics techniques.
  • Keizo Watanabe
    Type: Article
    2005 Volume 10 Issue 3 Pages 437-442
    Published: September 30, 2005
    Released: February 01, 2017
    An original method for visualizing 4D hyperobjects is presented. The color of a point on the 2D projection of a 4D hyperobject is determined by the geometry information integrated along the intersection between the hypersurfaces of the hyperobject and a "planar ray" from the observer's eye; a set of these planar rays covers the whole hypersurface area in 4D space. The geometry information is read out with light that is pseudo-reflected not by the hypersurface itself but by the displacement of its normal, which allows the observer to perceive hypersurface boundaries and curvature on the hyperobject. This method can visualize both hyperobjects made of flat hypersurfaces, such as 4D polytopes, and those made of curved hypersurfaces, such as the 4D hypersphere.
  • Seiji Tomokuni, Shinichi Hirai
    Type: Article
    2005 Volume 10 Issue 3 Pages 443-452
    Published: September 30, 2005
    Released: February 01, 2017
    Deformable soft objects such as food and tissue show both elastic and viscoplastic properties and are referred to as rheological objects. Physical models of such virtual objects have been developed, but ordinary computers are not powerful enough to compute large virtual rheological objects in real time. This paper describes the real-time computation of the deformation of virtual rheological objects on an FPGA (field-programmable gate array), an LSI whose logic circuit is rewritable. FPGAs enable us to compute virtual rheological deformation in parallel. We designed a logic circuit that computes the deformation of virtual rheological objects on an FPGA and realized a system that computes the deformation 8.26 times as fast as a PC with a 1.7 GHz CPU. Our estimates show that an FPGA is capable of computing the deformation 75.2 times faster than a PC.
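As an illustration of the kind of update such hardware parallelizes, a one-dimensional rheological element, here a Voigt element (spring in parallel with a damper) in series with a second damper, can be stepped with explicit Euler integration. This particular model and all parameter values are assumptions for illustration, not the paper's circuit:

```python
def rheo_step(eps_e, eps_p, sigma, E=10.0, c_v=5.0, c_p=20.0, dt=0.01):
    """Explicit-Euler update of a three-element rheological model: a
    Voigt element (spring E parallel to damper c_v) in series with a
    damper c_p, driven by stress sigma. The Voigt strain eps_e recovers
    when the load is removed; the damper strain eps_p remains as
    permanent deformation, which distinguishes rheological objects such
    as food and tissue from purely elastic ones."""
    eps_e += (sigma - E * eps_e) / c_v * dt
    eps_p += sigma / c_p * dt
    return eps_e, eps_p

# Load with a constant stress, then unload and let the object relax:
# the elastic part springs back, the plastic part persists.
eps_e = eps_p = 0.0
for _ in range(500):                      # loading phase
    eps_e, eps_p = rheo_step(eps_e, eps_p, sigma=1.0)
for _ in range(500):                      # unloading phase
    eps_e, eps_p = rheo_step(eps_e, eps_p, sigma=0.0)
print(round(eps_e, 3), round(eps_p, 3))   # → 0.0 0.25
```

Each node's update depends only on local state, which is why an FPGA can evaluate many such elements in parallel in a single clocked pipeline.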
  • Type: Appendix
    2005 Volume 10 Issue 3 Pages 453-455
    Published: September 30, 2005
    Released: February 01, 2017
  • Type: Appendix
    2005 Volume 10 Issue 3 Pages App1-
    Published: September 30, 2005
    Released: February 01, 2017
  • Type: Cover
    2005 Volume 10 Issue 3 Pages Cover2-
    Published: September 30, 2005
    Released: February 01, 2017