Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 19, Issue 3
Displaying 1-21 of 21 articles from this issue
  • Article type: Cover
    2014 Volume 19 Issue 3 Pages Cover1-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (41476K)
  • Article type: Index
    2014 Volume 19 Issue 3 Pages Toc1-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (144K)
  • Article type: Index
    2014 Volume 19 Issue 3 Pages Toc2-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (75K)
  • Takeshi Kurata, Nobuchika Sakata, Yutaka Kanou
    Article type: Article
    2014 Volume 19 Issue 3 Pages 307-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (143K)
  • Article type: Appendix
    2014 Volume 19 Issue 3 Pages 308-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
  • Hiroshi Yasuda, Yoshihiro Ohama
    Article type: Article
    2014 Volume 19 Issue 3 Pages 309-314
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
This study specifically examines wall see-through visualization for drivers at blind corners to prevent crossing collisions. We believe that achieving the desired effect with the simplest possible visualization is key to building practical systems, whereas previous studies mainly targeted rich visualization that makes the wall appear actually transparent. We compared several visualization levels using qualitative and quantitative measures based on the driver's collision-estimation performance in both central and peripheral vision. Additionally, we analyzed whether differences in the displayed areas and prior knowledge affect differentiation of the visual stimuli. The results revealed that displaying only the direction of the obscured vehicle as a small circle is sufficient for collision estimation, although it was perceived as less informative. Similarly, the difference in displayed areas had no significant effect on collision-estimation performance. We also obtained a result indicating that prior knowledge of the types of visual stimuli may affect their differentiation in peripheral vision.
    Download PDF (11141K)
  • Kazuma Aoyama, Hideyuki Ando, Hiroyuki Iizuka, Taro Maeda
    Article type: Article
    2014 Volume 19 Issue 3 Pages 315-318
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
In this work, we investigated the human perceptual characteristics of the acceleration sensation evoked by Galvanic Vestibular Stimulation (GVS) and validated the enhancement effect of a countercurrent at the dead current strength. Our experiments, which measured the rate of correct directional perception, showed that the current threshold is 0.3-0.4 mA. To validate the enhancement effect of the countercurrent, we compared the rates of correct perception for long and short countercurrents and a constant current. The results show that the rate of correct directional perception with a long countercurrent is significantly higher than that with a constant current at the dead current value.
    Download PDF (4087K)
  • Hiroyuki Kawakita, Toshio Nakagawa, Makoto Sato
    Article type: Article
    2014 Volume 19 Issue 3 Pages 319-328
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
We propose a new TV system that can extend the representation of TV programs beyond the TV screen. In the system, which we named augmented TV, animated 3DCG content synchronized with TV programs is overlaid on live video from the mobile device's camera using augmented reality techniques, so that a TV character can appear to come out of the screen. We developed an accurate synchronization method and an authoring environment for augmented TV content. We implemented augmented TV and confirmed frame-accurate synchronization (a synchronization error of about 0.03 seconds or less). We also confirmed that the authoring environment makes it easy to produce augmented TV content.
    Download PDF (14863K)
  • Shogo Sato, Itaru Kitahara, Yuichi Ohta
    Article type: Article
    2014 Volume 19 Issue 3 Pages 329-338
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Facial expression, one of the most important non-verbal communication media, makes communication smoother by compensating for information missing from verbal communication. However, some shy people cannot use facial expressions as well as they would like. Such poor emotional expressiveness in a conversation partner makes it difficult to read feelings correctly and, as a result, hinders smooth communication. To solve this problem, this paper proposes a facial expression enhancement method that enables smooth communication with rich facial expressions. To enhance facial expression, facial shapes and textures are expressed as parameters in parametric spaces reconstructed from the person's natural facial images. In these parametric spaces, the difference between two facial expressions can be handled as a multidimensional vector. By scaling the difference vector between the input vector and the norm vector, the facial expression difference is enhanced without the need to recognize the facial expression. We then generate an enhanced expression texture by re-projecting into image space. Finally, we overlay the synthesized facial image of the conversation partner onto the face region in the video-chat sequence. We conducted evaluations using CG faces and a real video sequence to confirm the expression-enhancement effect of our method.
    Download PDF (22258K)
  • Yuki Ueba, Nobuchika Sakata, Shogo Nishida
    Article type: Article
    2014 Volume 19 Issue 3 Pages 339-347
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Recently, demand for 3D modeling by ordinary users has increased. With existing hand-held 3D scanners, users have to estimate unmeasured areas and decide when to terminate scanning by watching a scan preview. Many user operations, such as estimating unmeasured spots and moving the hand-held device, therefore impose a burden on users. In this paper, we propose a novel 3D scanner that provides route guidance to users by limiting the scanning area at the beginning. With the scanning area limited, users can obtain the desired 3D model by following effective route guidance, and scanning terminates automatically. To realize route guidance and automatic termination, we propose a new method for finding spots that are unmeasured but measurable. We conducted an experiment to investigate the required time and mental effort under area limitation. The results show that our proposed method realizes low-burden 3D scanning effectively, easily, and quickly.
    Download PDF (18058K)
  • Kohei Okahara, Shuhei Ogawa, Takuya Shinmei, Daisuke Iwai, Kosuke Sato
    Article type: Article
    2014 Volume 19 Issue 3 Pages 349-355
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
In this paper, we present a study on the projected representation of an extended hand for an augmented body interface. The augmented body interface enables a user to manipulate remote appliances such as a TV, air conditioner, or light from a distance of about ten feet. The extended hand is one implementation of the augmented body interface: graphics of the user's hand projected onto an ordinary surface by a projector. The user can control the position of the extended hand by slightly moving his/her own hand, and its posture follows the user's hand gestures, such as grasping and releasing. If the user perceives the extended hand as his/her own hand, the augmented body interface provides a high degree of usability. In this paper, we investigate the sense of ownership of the extended hand through a psychological experiment of the kind widely used in rubber hand illusion research. We then evaluate the best graphical representation of the extended hand among five candidates through a grasp-and-drop task. Through the experiments, we confirm that a sense of ownership arises for the extended hand, and that the representation in which the projected extended hand is graphically connected to the real hand provides the best usability.
    Download PDF (14561K)
  • Shogo Ujihara, Masaki Omata
    Article type: Article
    2014 Volume 19 Issue 3 Pages 357-366
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
If we could know in advance whether the intended party can answer a phone call, we could avoid calls that would go unanswered; however, this cannot be known before calling. This paper describes the development of a system that estimates the possibility that a user can answer a phone call by detecting and analyzing the user's status from his/her smartphone's sensor data. In addition, this paper describes an experiment evaluating the effect of showing this possibility to callers. The results show that the system is effective on days that include both hours when the user can answer calls and hours when he/she cannot.
    Download PDF (4374K)
  • Tomoko Hashida, Kohei Nishimura, Takeshi Naemura
    Article type: Article
    2014 Volume 19 Issue 3 Pages 367-375
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    We have developed a hybrid writing and erasure system called Hand-rewriting in which both human users and computer systems can write and erase freely on the same piece of paper. When the user writes on a piece of paper with a pen, for example, the computer system can erase what is written on the paper, and additional content can be written on the paper in natural print-like colors. We achieved this hybrid writing and erasure on paper by localized heating combined with handwriting with thermochromic ink and localized ultraviolet-light exposure on paper coated with photochromic material. This paper describes our research motivation, design, and implementation of this interface and examples of applications.
    Download PDF (22060K)
  • Kohei NISHIMURA, Naoya KOIZUMI, Tomoko HASHIDA, Takeshi NAEMURA
    Article type: Article
    2014 Volume 19 Issue 3 Pages 377-385
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
Paper is thin, light, portable, and can be written on freely. This paper augments paper with computing technology while maintaining these advantages. Our previous system displayed additional content in natural print-like colors on paper using a UV projector and a photochromic material that develops color when exposed to UV light. To realize multi-color representation on this monochrome system, we introduce a new inkjet photochromic material and realize gradation by PWM and CMY color representation by juxtapositional color mixing. In addition, to clarify the limitations of this system and demonstrate fundamental design directions, we discuss the conditions that arise when the external stimulus controlling color is located separately from the display pixels. Finally, we present practical applications.
    Download PDF (20607K)
  • Yurina TAKATA, Hidenori WATANABE
    Article type: Article
    2014 Volume 19 Issue 3 Pages 387-395
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
The purpose of this study is to develop a map navigation system that adapts the displayed map to the user's spatial perception. To achieve this, we first developed a prototype map navigation system that toggles the viewpoint. An experiment using the prototype showed that the system was effective to some extent. We then proposed a categorization method using the Sense of Direction Questionnaire-Short Form (SDQ-S), based on experiments with SDQ-S and sketch maps. As a result, we developed a map system that changes the viewpoint, rotation, and an alert function depending on the user's spatial perception pattern as categorized by SDQ-S. This system was implemented as a smartphone application.
    Download PDF (7282K)
  • Keita Higuchi, Jun Rekimoto
    Article type: Article
    2014 Volume 19 Issue 3 Pages 397-404
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
We propose a flying telepresence platform that can augment an operator's sensation of movement. In this system, the operator's natural movements are synchronized with quadcopter motions such as rotation and horizontal and vertical movement, so the operator can intuitively control the quadcopter using his/her kinesthetic imagery. To augment the sensation of movement, the system also changes the mapping of movement distance between the operator and the quadcopter. We performed user studies to evaluate operability and user experience under different mappings. We also discuss applications of flying telepresence such as surveillance and entertainment.
    Download PDF (11817K)
  • Asako Soga, Masahito Shiba, Yusuke Niwa, Yoshihiro Okada
    Article type: Article
    2014 Volume 19 Issue 3 Pages 405-412
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
We have been archiving Buddhist ceremonial processions called Nerikuyo. Nerikuyo has distinctive features in its objects and actions that are difficult to convey with traditional museum panels. Our purpose is to create videos and interactive content that vividly portray this ceremony. We archived two Nerikuyo ceremonies as super-high-definition video and created video content for a special exhibition on Nerikuyo. We also proposed a virtual fitting system that recognizes users' gestures and displays the corresponding images and sounds over captured images of the users. All of the gestures are related to the poses or motions of Nerikuyo, and they are assigned to masks and tools. The videos were shown at a special exhibition of the Ryukoku Museum, and the proposed system was demonstrated for three days as one of the events related to the exhibition.
    Download PDF (18652K)
  • Kento Yamazaki, Fumihisa Shibata, Asako Kimura, Hideyuki Tamura
    Article type: Article
    2014 Volume 19 Issue 3 Pages 413-422
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
This paper describes the prototyping of a mixed reality (MR) order picking system for use in a warehouse. MR, a technology that superimposes computer-generated images (CGI) onto the real world in real time, is now used in various fields and has a variety of practical uses, including assistance in machine maintenance and repair and parts assembly for industrial products. In particular, giving instructions to workers via MR is applicable to a wide range of tasks. In this study, we build an MR-based order picking system as one such practical application. First, we analyze a conventional order picking system that uses colored lights mounted on racks to indicate picking items. We then design a prototype MR order picking system based on the results of this analysis. Our system can provide intuitive information to the worker by displaying CGI, such as arrows, through an HMD. We conducted experiments comparing the conventional system and the MR system, and identified some challenges for future work.
    Download PDF (17666K)
  • Ken Nakagaki, Yasuaki Kakehi
    Article type: Article
    2014 Volume 19 Issue 3 Pages 423-432
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
"Drawing" is one of the most familiar creative activities. In this research, by adding digital control to the mechanism of the compass, we propose a drawing interface that enables users to easily draw various figures on physical paper. Focusing on another function of the compass, measuring distances, we also developed a copy-and-paste function that lets users measure figures in the real world and instantly duplicate them on paper. Through this research, we aim to enrich drawing in the physical environment by freeing computational aids from displays and integrating them seamlessly into our everyday tools.
    Download PDF (21637K)
  • Article type: Appendix
    2014 Volume 19 Issue 3 Pages 433-436
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (228K)
  • Article type: Cover
    2014 Volume 19 Issue 3 Pages Cover2-
    Published: September 30, 2014
    Released on J-STAGE: February 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (94K)