Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 14, Issue 2
Displaying 1-21 of 21 articles from this issue
  • Article type: Cover
    2009 Volume 14 Issue 2 Pages Cover1-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Bibliography
    2009 Volume 14 Issue 2 Pages Misc1-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Index
    2009 Volume 14 Issue 2 Pages Toc1-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Index
    2009 Volume 14 Issue 2 Pages Toc2-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Takefumi Ogawa, Minoru Kobayashi, Jun Yamashita
    Article type: Article
    2009 Volume 14 Issue 2 Pages 145-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Appendix
    2009 Volume 14 Issue 2 Pages 146-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Masaki Hayashi, Hiromu Miyashita, Ken-ichi Okada
    Article type: Article
    2009 Volume 14 Issue 2 Pages 147-155
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Presence is the sense, or the subjective experience, of being in a place or an environment. While many researchers have attempted to devise a means enabling us to experience presence, this remains difficult to achieve due to the lack of a valid estimation method for KANSEI information. Objective information is obtained through our five senses, whereas KANSEI information is "subjective information" reflected in personal feelings, experiences, and positions. To address this issue, we propose here an estimation method for KANSEI information utilizing physiological information in virtual reality space. In this method, we define a physiological matrix that associates physiological information with KANSEI information. Experimental results indicate that this matrix is a valid method for estimating presence in virtual space.
  • Hidenori Watanave
    Article type: Article
    2009 Volume 14 Issue 2 Pages 157-162
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
We propose a new design methodology for 3Di (3D Internet). The designer should pay attention to 1. the visibility, 2. the direct accessibility, and 3. the spatialization of the target object. We have applied this methodology to the production and design of numerous architectural and artistic spaces in 3Di. This paper explains the concept and the concrete design methodology through applied artworks.
  • Helmut Prendinger
    Article type: Article
    2009 Volume 14 Issue 2 Pages 163-169
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
This paper describes the Global Lab, a novel platform for interaction, collaboration, and simulation, which is based on the 3D online virtual world of "Second Life". One of the key motivations for developing the Global Lab is to support the realization of an environmentally friendly society. We consider 'Virtual Mobility' technology, i.e., the use of digital alternatives to physical movement, an important contribution to eco-friendly behavior. In order to make life in the digital counterpart world natural and convenient, we developed technologies addressing both intuitive in-world communication and seamless cross-world communication between the virtual world and the real world. In addition, we demonstrate how the Global Lab can be used as a testbed for real-world sensor-based systems.
  • Kouichi MATSUDA
    Article type: Article
    2009 Volume 14 Issue 2 Pages 171-176
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Previous research on shared virtual worlds has mainly focused on how to share information among users, such as avatars, shared objects, and their behaviors. In our experiments with the Personal Agent-oriented Virtual Society "PAW^2", we found that providing unshared information among users helps both users and world developers realize and manage shared virtual worlds. In this paper, we present how we introduced unshared information into PAW^2, the collection of user experience data based on usage of unshared information, and the evaluation of the collected data, and then discuss the issues we found and future research on the possibility of "unshared" shared virtual worlds.
  • Hiromu Miyashita, Masaki Hayashi, Ken-ichi Okada
    Article type: Article
    2009 Volume 14 Issue 2 Pages 177-184
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Intuitive FOV movement is possible in VR space using an HMD with an attached head-tracker. However, objects of attention cannot be detected exactly from measurements of head movement alone. An eye-tracker is required to determine gaze areas, but conventional eye-trackers interfere with the use of a binocular HMD. In this study, electrooculography (EOG) was mounted on an HMD together with a head-tracker; the EOG neither obscured the HMD screen nor restricted head movements. Disk electrodes and a 3-axis gyro-sensor were attached to the HMD to evaluate the performance of this system, and digital filters were used as the denoising algorithm. Experiments showed that eye movements were detected in 0.37 seconds and that the accuracy of the EOG was 69%. These findings suggest that rough gaze directions are detectable by combining the EOG and the gyro-sensor.
  • Yuta Okajima, Shun Yamamoto, Yuichi Bannai, Kenichi Okada
    Article type: Article
    2009 Volume 14 Issue 2 Pages 185-192
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
In this paper, we propose a Mixed Reality remote collaboration system that reflects the attention of the remote user on a work object. In remote collaboration, recognizing where the remote user is and what the remote user is doing is essential for working with a partner. Previously, video pictures and avatars were mainly used to convey the remote situation, but with these representations the user had to take their eyes off the work object to check them, and then look for them again. We propose a system that gives the user an easy understanding of the remote user in object-based remote collaboration. A user can collaborate with a remote user without any restriction and can recognize where the remote user is simply by looking at their own object. Comparative experiments show that interactions become more efficient and smoother with our proposed system than with conventional methods.
  • Masaharu Isshiki, Takahiro Sezaki, Katsuhito Akahane, Ken Kinomura, Ma ...
    Article type: Article
    2009 Volume 14 Issue 2 Pages 193-201
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Lately, large-scale 3D spaces such as online games and virtual cities have been constructed as computer performance and network environments have improved. Similarly, 6DOF haptic devices that can be used to interact with objects in a 3D space intuitively have been developed. Operating a pointer in a 3D space with a 6DOF haptic device brings various advantages, such as an increased sense of reality and intuitive operation through force feedback. However, few techniques have been proposed for operating a pointer in a large-scale 3D space, such as a virtual city, with a 6DOF haptic device. In this research, we propose the "Dual Shell Method", which enables intuitive pointer operation in a large-scale VR space by automatically switching the state of the clutch. Results from a preliminary experiment suggest that our proposed method facilitates operation in a 3D space.
  • Kouichi Hirose, Takefumi Ogawa, Kiyoshi Kiyokawa, Haruo Takemura
    Article type: Article
    2009 Volume 14 Issue 2 Pages 203-211
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
This paper introduces interaction techniques for interactive reconfiguration of the transformation hierarchy among coordinate systems in the multi-viewport interface. The multi-viewport interface provides an arbitrary number of secondary views in window frames placed in a virtual environment, each showing the same or a different virtual scene from a different perspective. Using the multi-viewport interface, the user can seamlessly perform a variety of object manipulation and user navigation operations between multiple virtual scenes. The relationship among the reference frames of a window frame, the primary view outside the frame, the secondary view inside the frame, and the user determines the characteristics and usability of a virtual environment. We have thoroughly examined these relationships and proposed a framework. Through the experiments, we discuss how different transformation hierarchies of a multi-viewport interface have different impacts on users' behaviors and performance.
  • Takashi Okuma, Masakatsu Kourogi, Kouichi Shichida, Takeshi Kurata
    Article type: Article
    2009 Volume 14 Issue 2 Pages 213-221
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
By developing and evaluating science-museum guide systems, we have been investigating mobile information services based on the user's situation in indoor environments. These investigations indicated possibilities for realizing more understandable navigation based on well-considered 3-D map presentation methods and for providing content that raises the appeal of real exhibits. To examine these possibilities more concretely, we conducted a subjective evaluation of virtual viewpoint control for 3-D map presentation, of how-to-experience instruction content for enhancing real exhibits, and of the entire guide system. With respect to virtual viewpoint control, subjects preferred the combination of an enlarged view of the current position and automatic map rotation based on walking direction; depending on the condition, however, the combination of a bird's-eye view and absolute direction presentation was also preferred. In addition, we found that how-to-experience instructions raised the popularity of an unpopular exhibit room. Although the guide system was received favorably, future issues were also clarified, such as more context-aware content presentation and evaluation of the renewal loop of the how-to-experience instructions in practical services.
  • Masataka Niwa, Yuichi Itoh, Fumio Kishino, Haruo Noma, Yasuyuki Yanagi ...
    Article type: Article
    2009 Volume 14 Issue 2 Pages 223-232
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
In this paper, we explore the use of tactile apparent motion at different patterns and speeds for information displays. As the first step, we investigate stimulus conditions and the number of tactors needed to build information displays. As the second step, a prototype tactor array consisting of five tactors, mounted on the subject's upper arm, was constructed. In order to evaluate the system, experiments measuring users' ability to distinguish between multiple kinds of stimuli were conducted at two levels: the physical level and the semantic level. At the physical level, users' ability to distinguish four motion patterns at three different speeds was tested. At the semantic level, users' ability to identify four kinds of messages with three levels of importance, each corresponding to a combination of a specific motion pattern and speed, was tested. In both experiments, users had little trouble with pattern and speed identification. Several ideas for future exploration of tactile apparent motion for general-purpose information displays are presented.
  • Hiroaki Tobita, Jun Rekimoto
    Article type: Article
    2009 Volume 14 Issue 2 Pages 233-240
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    The ActiveInk system integrates the advantages of real world painting techniques with computer graphics (CG) effects such as natural phenomena animations (e.g., water, fire, snow, and clouds), attributes (e.g., rubber, cloth, and land), surface materials (e.g., texture effects, metal, and glass), and so on. Most conventional paint systems mainly allow users to set a simple and static color. Also, they require users to control many parameters if the user applies complex effects. However, the ActiveInk system treats many behaviors as separate behavior inks (e.g., water, cloud, and cloth ink), so a user can add effects by selecting a behavior ink and painting it onto objects to realize CG effects. Moreover, the system has a palette area that is similar in function to an actual painter's palette, so the user can create new ink by mixing different types of behavior ink and can control the behavior in the palette area directly. All creative manipulations are based on painting to avoid the difficulties of traditional systems such as the need to deal with complex parameters and GUIs, so these simple manipulations can be applied to a wide variety of areas.
  • Yasushi Ikei, Hirofumi Ota
    Article type: Article
    2009 Volume 14 Issue 2 Pages 241-249
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
In this paper we propose a novel approach to augmenting human memory based on spatial and graphic information using wearable and smartphone devices. Mnemonics is a technique for memorizing a number of unstructured items that has been known for more than two millennia and was used in ancient Greece. Although its utility is remarkable, acquiring the skill to take advantage of mnemonics is generally difficult. In this study we propose a new method for increasing the effectiveness of classic mnemonics by facilitating the process of memorizing and applying them. The spatial electronic mnemonics (SROM) proposed here is partly based on an ancient technique that utilizes locations and images, reflecting the characteristics of human memory. We first present the design of the SROM as a working hypothesis that augments traditional mnemonics using a portable computer. Then an augmented virtual memory peg (vmpeg), which incorporates a graphic numeral and a photograph of a location, is introduced as a first implementation. In the experiment, subjects exhibited remarkable retention of the vmpegs over a long time period. In addition, the results of a subjective evaluation indicated that the mnemonics greatly improved memorability with reduced cognitive effort.
  • Article type: Appendix
    2009 Volume 14 Issue 2 Pages 251-253
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Appendix
    2009 Volume 14 Issue 2 Pages App1-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Article type: Cover
    2009 Volume 14 Issue 2 Pages Cover2-
    Published: June 30, 2009
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS