Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 22, Issue 1
Displaying 1-15 of 15 articles from this issue
  • Takefumi Ogawa, Ryotaro Nakahari, Arinobu Niijima
    2017 Volume 22 Issue 1 Pages 3-10
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    There are many situations in daily life where we perform simple physical tasks such as lifting or carrying objects. Several methods have been studied for supporting such physical work, but they have problems with cost and effectiveness. In this paper, we propose weight perception control methods using electrical muscle stimulation (EMS). We investigated the change in weight perception when electrical signals are delivered to the muscles that contract when lifting objects. Subjective experiment results showed that subjects perceived objects as lighter when electrical signals were delivered to their biceps brachii or forearm muscles. This suggests that our methods are effective for supporting physical work. (An illustrative control-loop sketch follows below.)

    Download PDF (1442K)
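
    The following is a minimal, illustrative control-loop sketch in Python of the kind of EMS support the preceding abstract describes. The force-sensor and stimulator APIs (force_sensor.read_newtons, ems.stimulate, ems.stop), the channel name, and the threshold and intensity values are all assumptions for illustration, not the authors' implementation.

        import time

        LIFT_THRESHOLD_N = 5.0   # assumed force threshold for detecting a lift

        def support_lift(force_sensor, ems, channel="biceps_brachii"):
            """Deliver EMS to a lifting muscle while an object is being lifted,
            so the object is perceived as lighter (hypothetical driver APIs)."""
            while True:
                load = force_sensor.read_newtons()         # assumed sensor call
                if load > LIFT_THRESHOLD_N:
                    ems.stimulate(channel, intensity=0.6)  # assumed 0..1 scale
                else:
                    ems.stop(channel)
                time.sleep(0.01)                           # 100 Hz control loop
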
  • Taku Hachisu, Yadong Pan, Tadayuki Tone, Baptiste Bourreau, Kenji Suzuki
    2017 Volume 22 Issue 1 Pages 11-18
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    We present a novel head-mounted device for measuring face-to-face behavior. The device has unique capabilities: it measures the timing and duration of face-to-face behavior as well as the identity of the partner, can be used by multiple parties, and automatically logs the behavior by connecting to an Android device. We employ an infrared communication technique to measure the behavior. A pilot experiment with two dummy heads shows that the developed devices can detect face-to-face behavior within a certain angle. We also conducted an experiment with human participants. The events recorded by the developed device show moderate agreement with those coded from video by human observers. (A logging sketch follows below.)

    Download PDF (3453K)
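
    A rough sketch of the event logging the preceding abstract describes: each device broadcasts its ID over infrared, and a receiver turns the set of currently visible partner IDs into (partner, onset, duration) records. The ir_receiver.read_ids API and the 10 Hz polling rate are assumptions, not the device's actual firmware.

        import time

        def log_face_to_face(ir_receiver, log):
            """Turn currently visible partner IDs into (id, onset, duration) records."""
            active = {}                                # partner_id -> onset time
            while True:
                seen = set(ir_receiver.read_ids())     # assumed: IDs now in view
                now = time.time()
                for pid in seen - active.keys():       # partner appeared
                    active[pid] = now
                for pid in active.keys() - seen:       # partner disappeared
                    log.append((pid, active[pid], now - active.pop(pid)))
                time.sleep(0.1)                        # assumed 10 Hz polling
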
  • Hiroshi Mori, Yuuta Sugawara, Fubito Toyama, Kenji Shoji
    2017 Volume 22 Issue 1 Pages 19-26
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    The availability of low-cost depth sensors has facilitated motion capture applications for personal use. Human motion can be captured to control an animated avatar, with no requirement to wear the dedicated sensors used in camera-based motion capture systems. However, users must contend with avatar pose errors resulting from user pose recognition failures, which can be caused by problems in the usage environment or by measurement errors. In this paper, we propose a simple method for synthesizing seemingly natural avatar motion based on the user's body movements, even in the presence of pose recognition failures. First, we calculate a degree of confidence for each joint's pose parameters captured by the depth sensor. Next, low-confidence joint poses are replaced with a similar pose calculated from the high-confidence joint poses. In addition, joints that are not detected are filled in with a calculated similar pose. As a result, seemingly natural avatar motion, based on high-confidence user movements, can be synthesized. (A minimal sketch of this idea follows below.)

    Download PDF (3518K)
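
    A minimal sketch of the confidence-based repair idea in the preceding abstract: joints below a confidence threshold are replaced using the database pose most similar to the reliable joints. The threshold value, the distance metric, and the flat parameter layout are assumptions; the paper's actual similarity computation may differ.

        import numpy as np

        CONF_THRESHOLD = 0.5          # assumed confidence cutoff

        def repair_pose(joints, confidences, pose_database):
            """Replace low-confidence joint parameters using the database pose
            most similar to the high-confidence joints (illustrative only)."""
            high = confidences >= CONF_THRESHOLD
            # nearest database pose, compared only over reliable joints
            dists = [np.linalg.norm(p[high] - joints[high]) for p in pose_database]
            best = pose_database[int(np.argmin(dists))]
            repaired = joints.copy()
            repaired[~high] = best[~high]   # also fills in undetected joints
            return repaired
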
  • Hideaki Touyama, Junwei Fan
    Article type: Paper
    2017 Volume 22 Issue 1 Pages 27-30
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    This paper describes a technique for decision by majority based on brain signal analysis. The electroencephalograms (EEG) of twenty-four volunteers were recorded during serial presentations of computer-generated (CG) images of human emotional faces. We focused on the P300 event-related potential (ERP), and its amplitude was investigated while varying the ratio of collaborative P300 occurrences in the group. A supervised machine learning technique was used to perform the decision by majority, and the estimation performance reached almost 80%. This novel concept could be applied to decision by majority in Computer-Supported Cooperative Work (CSCW), such as Virtual Reality (VR) interactions, by means of thought alone. (A classifier sketch follows below.)

    Download PDF (1405K)
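
    A sketch of the supervised decoding step from the preceding abstract, assuming per-trial features such as mean P300 amplitudes for the group have already been extracted; the linear SVM and the feature choice are assumptions, not necessarily the authors' pipeline.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def evaluate_majority_decoder(X, y):
            """X: one row of ERP features per trial (e.g. mean P300 amplitude
            per group member); y: the group's majority decision per trial."""
            clf = SVC(kernel="linear")
            scores = cross_val_score(clf, X, y, cv=5)   # estimation performance
            return np.mean(scores)
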
  • Yu Myojin, Yuki Hashimoto
    2017 Volume 22 Issue 1 Pages 31-40
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    Urushi (Japanese lacquer) is a natural material that not only has decorative beauty but also many useful engineering properties, such as electrical and water resistance. In this research, we have created an electronic circuit that uses Urushi as an insulating material and structure (the Urushi Circuit). In addition, we showed that it is possible to provide interactive features while preserving the original beauty and texture of Urushi. We aimed at a multilayered circuit to create a high-performance Urushi Circuit. To this end, we proposed and developed an Urushi processing method based on ultraviolet (UV) light (the Ultraviolet Irradiation Method). In this paper, the accuracy of the verification experiment is enhanced to improve the processing accuracy of the Ultraviolet Irradiation Method. Based on our data, we created a model describing how Urushi dissolves, to facilitate good reproducibility and controllability. Next, the reproducibility of the model was verified by performing UV irradiation tailored to a target value. Further, we fabricated an Urushi Dissolution System using the Ultraviolet Irradiation Method, and confirmed its feasibility by performing UV irradiation tailored to the target value at arbitrary positions. (An illustrative dose-model sketch follows below.)

    Download PDF (3526K)
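
    For illustration only, here is what a simple dissolution model of the kind the preceding abstract mentions could look like, assuming dissolved depth grows linearly with UV exposure time; the rate constant is invented and the authors' actual model is not reproduced here.

        def irradiation_time(target_depth_um, rate_um_per_s=0.05):
            """UV exposure time needed to dissolve Urushi to a target depth,
            under an assumed linear dose-response model."""
            return target_depth_um / rate_um_per_s

        print(irradiation_time(30.0))   # 600.0 s for a 30 um target, under this model
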
  • Sho Mitarai, Nagisa Munekata, Tetsuo Ono
    2017 Volume 22 Issue 1 Pages 41-50
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    In recent years, gesture recognition using surface electromyography (sEMG) has become popular, and these approaches can classify gestures at a high recognition rate. In particular, easily mountable sEMG-based gesture recognition devices have been developed. When developing such devices for use in daily life, it is necessary to consider situations where users operate the device while gripping objects such as umbrellas or bags. By using forearm sEMG, it is still possible to collect data in these situations without intruding on what the user is doing. However, sEMG-based gesture recognition while gripping an object has not been sufficiently investigated. Therefore, in this study, in order to investigate the feasibility of gesture input while gripping an object, we performed an experiment measuring recognition accuracy for four hand gestures while users held a variety of objects. From the results, we discuss both the feasibility and the problems of gesture input while gripping objects, and propose a new approach to resolve those problems. (A pipeline sketch follows below.)

    Download PDF (3927K)
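
    A common sEMG gesture-recognition pipeline, sketched as a guess at the kind of setup the preceding study uses: root-mean-square features per channel over a window, fed to an off-the-shelf classifier. The feature choice and classifier are assumptions, not the authors' exact method.

        import numpy as np
        from sklearn.svm import SVC

        def rms_features(window):
            """Root-mean-square per channel; window shape: (samples, channels)."""
            return np.sqrt(np.mean(np.square(window), axis=0))

        def train_gesture_classifier(windows, labels):
            """windows: list of (samples, channels) arrays; labels: gesture IDs."""
            X = np.array([rms_features(w) for w in windows])
            clf = SVC(kernel="rbf")
            clf.fit(X, labels)
            return clf
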
  • Jun Nishida, Kenji Suzuki
    2017 Volume 22 Issue 1 Pages 51-60
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    This paper presents a novel wearable kinesthetic I/O device that blends kinesthetic interaction among people. By connecting their bodies, users are able to perceive each other's muscle activity bi-directionally, such as muscle contraction or joint rigidity. The kinesthetic information is exchanged between users through somatosensory channels by biosignal measurement and stimulation. We have developed a wearable haptic I/O device, called bioSync, equipped with an electrode system that enables the same electrodes to perform both biosignal measurement and stimulation at 100 Hz. Potential scenarios for this kinesthetic synchronization include interactive rehabilitation and sports training, where it is essential for both trainers and learners to perceive not only physical body motions but also muscle activity. The methodology, performance evaluations, user studies, and potential scenarios are described in this paper. (A time-multiplexing sketch follows below.)

    Download PDF (3080K)
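
    The key engineering point in the preceding abstract is that one electrode pair alternates between measurement and stimulation at 100 Hz. A time-multiplexed loop of that shape might look like the following; the electrode and link APIs and the current mapping are assumptions, not the bioSync firmware.

        import time

        CYCLE_S = 0.01   # 100 Hz measurement/stimulation cycle, per the abstract

        def to_current(emg_value, gain=0.5, max_ma=10.0):
            """Map a measured EMG level to a bounded stimulation current (assumed)."""
            return min(abs(emg_value) * gain, max_ma)

        def biosync_loop(electrodes, partner_link):
            """Each cycle: measure on the shared electrodes, exchange signals
            with the partner device, then stimulate with the received signal."""
            while True:
                emg = electrodes.measure()                  # assumed driver API
                partner_link.send(emg)
                received = partner_link.receive()
                electrodes.stimulate(to_current(received))  # assumed driver API
                time.sleep(CYCLE_S)
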
  • Iwane Maida, Tetsuro Ogi, Tetsuya Toma
    2017 Volume 22 Issue 1 Pages 61-69
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    Telesurgery by man-machine interaction is currently limited to experimental trials, partly because the visual delay during transmission interferes with the surgeon's hand-eye coordination. The purpose of this study is to examine the effect of visual delay on performance in minute pointing tasks. In this experiment, the range of visual delay was decided based on previous telesurgery cases, and the index of difficulty was prepared based on Fitts' law. We conducted tests in which subjects performed pointing operations while visual delay interfered with their hand-eye coordination. The delay levels ranged from 131 ms to 598 ms in steps of approximately 67 ms. The results showed that Fitts' law can be applied up to approximately 500 ms. In addition, the results suggest that another threshold exists between 331 ms and 398 ms. These two thresholds can be considered as changing points of motion strategies. (A Fitts' law sketch follows below.)

    Download PDF (2403K)
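
    The index of difficulty mentioned in the preceding abstract is conventionally computed with Fitts' law; the Shannon formulation below is a standard choice (the paper may use a different variant). Movement time is then regressed linearly on ID, and Fitts' law "applies" where that linear fit remains good.

        import numpy as np

        def index_of_difficulty(distance, width):
            """Shannon form of Fitts' law: ID = log2(D / W + 1), in bits."""
            return np.log2(distance / width + 1.0)

        def fit_fitts(ids, movement_times):
            """Fit MT = a + b * ID by least squares; returns (a, b)."""
            b, a = np.polyfit(ids, movement_times, 1)
            return a, b
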
  • Kosuke Sato, Jun Nishida, Hikaru Takatori, Kenji Suzuki
    2017 Volume 22 Issue 1 Pages 71-80
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    In this research we propose a wearable suit for embodiment transformation, which virtually realizes a child's experience while preserving the user's interactions and perceptions. The embodiment transformation suit consists of a viewpoint translator and passive hand exoskeletons. The viewpoint translator simulates a child's point of view (POV) using a pan-tilt stereo camera attached at waist height and a head-mounted display (HMD); the pan-tilt mechanism follows the user's head motion. The passive hand exoskeletons simulate a child's small grasping motion using multiple quadric crank mechanisms and a child-sized rubber hand. Experiencing a virtualized child's embodiment through one's own body provides opportunities to feel and understand a child's perception and cognition, and to evaluate products and spaces such as hospitals, public facilities, and homes from the standpoint of universal design. This paper describes the system design and implementation of the viewpoint translator and the exoskeletons, and their assessment based on user feedback at exhibitions. (A head-following sketch appears below.)

    Download PDF (2605K)
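
    A sketch of the head-following behavior of the viewpoint translator described above: the waist-mounted pan-tilt camera tracks the HMD's yaw and pitch. The HMD and gimbal APIs and the angle limits are assumptions for illustration.

        def clamp(value, lo, hi):
            return max(lo, min(hi, value))

        def follow_head(hmd, gimbal):
            """Drive the waist-mounted pan-tilt stereo camera with the user's
            head rotation so the child's-eye view follows head movements."""
            while True:
                yaw, pitch, _roll = hmd.orientation_deg()   # assumed HMD query
                gimbal.set_pan(clamp(yaw, -90.0, 90.0))     # pan follows yaw
                gimbal.set_tilt(clamp(pitch, -45.0, 45.0))  # tilt follows pitch
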
  • Hiroto Saito, Kentaro Fukuchi
    2017 Volume 22 Issue 1 Pages 81-90
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    Previous studies of the recognition process of self-attribution concluded that it is mainly caused by congruence between visual and proprioceptive information, and by congruence between visual and efferent information, but they did not establish which plays the primary role during voluntary movements. We conducted a user study that distinguishes proprioceptive information from efferent information. Subjects performed active handle movements to control a graphical object displayed on a screen. The rotation speed was modified to create a mismatch between visual and proprioceptive information. The results indicate that efferent information plays the primary role in the recognition process of self-attribution. (A gain-manipulation sketch follows below.)

    Download PDF (2397K)
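
    The manipulation the preceding study relies on is a visual gain applied to the handle rotation, which decouples visual from proprioceptive information. A one-line sketch; the gain values are not taken from the paper.

        def displayed_angle(handle_angle_deg, gain):
            """Visual rotation shown for a given handle rotation; gain != 1.0
            creates the visuo-proprioceptive mismatch (illustrative values)."""
            return handle_angle_deg * gain

        # e.g. displayed_angle(10.0, 1.5) shows 15 degrees for a 10-degree turn
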
  • Yuko Watanabe, Yusuke Ikeda, Shiro Ise
    2017 Volume 22 Issue 1 Pages 91-101
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    In this paper, we present the development of a virtual-sound table-tennis system using a 3D-immersive auditory display based on the boundary-surface control (BoSC) principle. Sound table-tennis is a modified version of table tennis for visually impaired people, in which the players are required to roll a ball from one end of the table to the other, instead of hitting the ball over the net. Using a sounding ball and a special racket, players hit the ball by listening for the direction from which it is rolling towards them. Our proposed system reproduces the rolling sound using a 3D-immersive auditory display referred to as a 'Sound Cask', and the player is asked to assess the direction of the ball by perceiving the virtual sound source in order to return the ball. Using a motion sensor, the system detects the player's hitting action and reproduces the rolling sound. We introduce the system configuration and design methods, and conduct experimental studies to confirm the applicability of the system. The experimental results show that localization improved after the virtual-sound table-tennis training session. (A hit-detection sketch follows below.)

    Download PDF (1449K)
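
    A sketch of the interaction step implied by the preceding abstract: a motion sensor detects the hitting action, and the system sends the virtual ball back while rendering its rolling sound. The sensor and audio APIs and the threshold are assumptions, not the system's actual implementation.

        SWING_THRESHOLD_G = 2.0   # assumed racket-acceleration threshold

        def update(racket_sensor, audio, ball):
            """When a hit is detected, send the ball back and render its
            rolling sound at the ball's position on the auditory display."""
            if racket_sensor.acceleration_g() > SWING_THRESHOLD_G:  # assumed API
                ball.reverse_direction()
                audio.play_rolling(source_position=ball.position)   # assumed API
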
  • Daiki Arakawa, Hirotaka Ogai, Yuichiro Fujimoto, Kinya Fujita
    2017 Volume 22 Issue 1 Pages 103-112
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    To achieve multi-fingered assembly tasks with limited visual information in a VR environment, the system needs to allow the user to recognize the contact status between the manipulated object and other objects through haptic information. In VR environments where Virtual Coupling (VC) is used together with ungrounded force display devices, the VC parameter should be set low to avoid producing excessive force. However, the decreased parameter potentially disrupts the recognition of object rotation. In this research, we propose Rotational-component-emphasizing Virtual Coupling (RVC), which extracts the displacement caused by rotation between the user's hand and the virtual hand, and emphasizes it by applying a larger VC coefficient. We assessed the recognition accuracy of the inclination angles of invisible planes with twelve participants using fingertip-worn, substitutive-type ungrounded force display devices. The results revealed that RVC outperforms conventional VC in recognizing a contacted surface through a manipulated object. (A decomposition sketch follows below.)

    Download PDF (3069K)
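
    One way to read the RVC idea above as code: split the fingertip coupling displacement into the part shared with the whole hand (translation) and the remainder (attributed to rotation), then weight the latter with a larger coefficient. The decomposition and the stiffness values are assumptions about the method, for illustration only.

        import numpy as np

        K_TRANS = 50.0    # ordinary (low) VC stiffness, value assumed
        K_ROT = 200.0     # emphasized stiffness for the rotational part, assumed

        def rvc_force(p_user, p_virtual, c_user, c_virtual):
            """p_*: fingertip positions; c_*: hand-centroid positions.
            Returns the displayed force with the rotational part emphasized."""
            d_total = np.asarray(p_user) - np.asarray(p_virtual)
            d_trans = np.asarray(c_user) - np.asarray(c_virtual)
            d_rot = d_total - d_trans      # remainder attributed to rotation
            return K_TRANS * d_trans + K_ROT * d_rot
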
  • Yasunaga Monno, Hirohiko Kaneko
    2017 Volume 22 Issue 1 Pages 113-123
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    It has been reported, using ambiguous motion stimuli, that motion perception is affected by voluntary action. In this study, we investigated the effect of action on the perceived speed of motion. In Experiment 1, we measured the perceived speed of a moving object that was linked to the participants' hand movement in timing, motion direction, and speed, and found that the faster the participant moved the hand, the faster the perceived speed. In Experiment 2, we measured the perceived speed of an object linked to the hand movement in timing and motion direction, but not speed. In this case, perceived speed was independent of hand speed even when the object moved at the same speed as the hand. These results suggest that the effect of action on speed perception depends on the intended object speed of the hand movement.

    Download PDF (1544K)
  • Takeshi Tanabe, Hiroaki Yano, Hiroo Iwata
    2017 Volume 22 Issue 1 Pages 125-134
    Published: 2017
    Released on J-STAGE: March 31, 2017
    JOURNAL FREE ACCESS

    We propose a device that can display a translational force and torque using two vibration speakers. Each vibration speaker generates an asymmetric vibration when a sound with an asymmetric amplitude is input. Asymmetric vibrations induce perceived forces that pull or push a user's hand in a particular direction. The user perceives a translational force and/or torque with the thumb and index finger, depending on the direction and amplitude of each speaker's force. We conducted four experiments to evaluate the mechanical properties and the proprioceptive sensations. As a result, we confirmed that the proposed device can present a translational force of up to 0.5 N and a torque of up to 9.5 N·mm. (A waveform sketch follows below.)

    Download PDF (1602K)
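
    For illustration, an asymmetric-amplitude signal of the general kind the preceding abstract describes can be built as a short strong pulse in one direction each cycle and a long weak recovery in the other, balanced to zero mean so that only the strong phase exceeds the perception threshold. Driving both speakers the same way would then yield a net translational force, and opposite ways a torque. The waveform shape and parameters are assumptions, not the authors' signal.

        import numpy as np

        def asymmetric_wave(freq_hz=75.0, duration_s=1.0, sr=44100, duty=0.2):
            """Zero-mean asymmetric waveform: a brief strong pulse per cycle
            and a long weak opposite phase (illustrative, not the paper's)."""
            t = np.arange(int(duration_s * sr)) / sr
            phase = (t * freq_hz) % 1.0
            strong = 1.0                      # short, strong push one way
            weak = -duty / (1.0 - duty)       # long, weak recovery; mean is zero
            return np.where(phase < duty, strong, weak)
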