Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Volume 25, Issue 2
Showing 1-8 of 8 articles from the selected issue
Special Issue: "Mixed Reality 8"
Special Issue Papers
  • 大城 和可菜, 鏡 慎吾, 橋本 浩一
    Article type: Basic Paper
    2020 Volume 25 Issue 2 Pages 108-116
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    Low-latency projection is a key technology for fast motion-adaptive projection. Digital Micromirror Devices (DMDs) are widely used for this purpose because they enable high frame rate projection of binary patterns, although additional techniques are needed to realize projection of multi-valued images. This paper proposes a low-latency projection method, Binary Frame Warping, with which displayed patterns are warped at the binary pattern rate instead of the video frame rate. Experimental results suggest that the proposed method applied to 60-fps video input offers perceived image quality comparable with that offered by over 500-fps projection.
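The core mechanism, warping each displayed binary pattern at the sub-frame rate rather than the video frame rate, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: an integer translation (`np.roll`) stands in for the general warp a real system would compute from the latest tracking estimate.

```python
import numpy as np

def bit_planes(frame):
    # Decompose an 8-bit frame into its 8 binary patterns (LSB first),
    # the form in which a DMD displays it time-sequentially.
    return [((frame >> b) & 1) for b in range(8)]

def warp_plane(plane, shift_xy):
    # Illustrative 2D warp: an integer translation via np.roll.
    # A real system would apply a projective warp from the tracked pose.
    dx, dy = shift_xy
    return np.roll(np.roll(plane, dy, axis=0), dx, axis=1)

def project_with_binary_frame_warping(frame, pose_at_subframe):
    # Each binary sub-pattern is warped with the pose estimate current at
    # *its own* display time, so motion compensation runs at the binary
    # pattern rate even though input video arrives at only 60 fps.
    planes = bit_planes(frame.astype(np.uint16))
    warped = [warp_plane(p, pose_at_subframe(b)) for b, p in enumerate(planes)]
    # The eye integrates the weighted binary patterns back into a gray image.
    return sum(w << b for b, w in enumerate(warped)).astype(np.uint8)
```

With a stationary pose (all shifts zero) the reassembled image equals the input frame; with per-subframe shifts each bit plane is compensated independently.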

  • 天野 敏之, 村上 巧輝
    Article type: Basic Paper
    2020 Volume 25 Issue 2 Pages 117-126
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    This paper proposes a method for altering the appearance of an object, not only to different colors but also to different illumination distributions, through reflectance analysis using multiple projectors and multiple cameras. Such viewing-direction-dependent appearance manipulation makes it possible to alter the perceived surface reflectance to that of other objects, and also to convey material perceptions such as the structural color of a morpho butterfly or metallic reflections. For this manipulation, we propose a reflection model that describes the optical response among multiple projectors and multiple cameras. We also propose methods for calculating the reflectance matrix and the optimized projection images using non-negative minimization. Through experiments on non-Lambertian reflective surfaces, we confirmed that our method achieves viewing-direction-dependent appearance manipulation that changes the surface to the designed colors.
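The non-negative minimization above, solving for a projection image p such that the modeled camera response R p matches a target appearance c under the physical constraint p >= 0, can be sketched with a generic projected-gradient solver. R and c here are placeholders for the paper's calibrated reflectance matrix and designed appearance, not values from the paper:

```python
import numpy as np

def nnls_projected_gradient(R, c, iters=500):
    # Minimize ||R p - c||^2 subject to p >= 0 by projected gradient descent.
    # Step size 1/L, where L = ||R||_2^2 is the Lipschitz constant of the gradient.
    L = np.linalg.norm(R, ord=2) ** 2
    p = np.zeros(R.shape[1])
    for _ in range(iters):
        grad = R.T @ (R @ p - c)          # gradient of the squared residual
        p = np.maximum(p - grad / L, 0.0)  # step, then project onto p >= 0
    return p
```

Where SciPy is available, `scipy.optimize.nnls` solves the same constrained least-squares problem directly.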

  • 中島 武三志, 植井 康介, 飯田 隆太郎
    Article type: Basic Paper
    2020 Volume 25 Issue 2 Pages 127-137
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    Cross-modality is applied as a haptic feedback method in virtual/mixed reality (VR/MR) environments using a head-mounted display (HMD). Haptic feedback based on cross-modal illusion is regarded as a way to reduce the burden on the user. In particular, there are reports of a haptic sensory illusion that occurs when a virtual object is touched in an MR environment, which can be applied to haptic feedback methods. However, the haptic sensory illusion obtained in this way is weak. In anticipation of a more reliable haptic feedback method, we focused on hearing, a modality that co-occurs with the sense of touch. In this paper, we aim to understand the influence of auditory stimuli on haptic sensory illusions when touching virtual objects in an MR environment. Specifically, we examined how adding a simulated contact sound when a user touches a virtual object with the palm affects the magnitude and impression of the haptic sensory illusion. Regarding magnitude, the haptic sensory illusion was strengthened by adding auditory stimuli. Regarding impression, a soft impression changed to a hard impression with the addition of auditory stimuli in one of the trials.

  • Chun Xie, Hidehiko Shishido, Yoshinari Kameda, Itaru Kitahara
    Article type: Paper
    2020 Volume 25 Issue 2 Pages 138-147
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    Projector calibration is a cumbersome process when building a spatial augmented reality system. Many methods have been proposed to simplify the process by using a projector-camera system (PROCAMS) and structured light (SL) projection. Conventional PROCAMS calibration methods use one or more stationary cameras. The camera position needs to be chosen carefully to deal with the trade-off between occlusion and depth error, and camera installation and setup also cost extra labor and time. In this paper, we propose a geometric calibration method using a mobile camera. Unlike conventional methods that use temporally coded SL, we use spatially coded SL so that it can be decoded from each individual camera frame. We take advantage of the multiview images of the projection surface and use two types of image features to achieve a robust calibration result. Experiments show that the result of our method is comparable with that of a checkerboard-based approach.
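When the projection surface is planar, the correspondences decoded from a spatially coded SL pattern relate projector and camera coordinates by a homography. A minimal Direct Linear Transform (DLT) sketch of that fitting step follows; the paper's full calibration recovers projector parameters beyond this, so treat it only as an illustration of how decoded correspondences constrain the geometry:

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: estimate H (3x3, up to scale) such that
    # dst ~ H @ src for planar point correspondences. Needs >= 4 pairs.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_h(H, pt):
    # Apply a homography to a 2D point in homogeneous coordinates.
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```

OpenCV's `cv2.findHomography` performs the same estimation with outlier rejection when that library is available.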

  • 平木 剛史, 川原 圭博, 苗村 健
    Article type: Review Paper
    2020 Volume 25 Issue 2 Pages 148-157
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    Collaborative control systems that combine digital images and multiple robots have attracted increasing attention as robot environments for displaying information to users. We surveyed projection-based robot control methods that enable such systems, together with their applications in the fields of mixed reality and user interfaces. In this paper, we describe the requirements of a robot environment for displaying information to users, related studies on projection-based robot control systems as user interfaces, and robot control methods using velocity vector fields that are applied in projection-based robot control. In addition, we describe the characteristics and applications of projection-based robot control systems using pixel-level visible light communication.
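A velocity-vector-field controller of the kind surveyed here can be sketched in a few lines: each robot samples the field at its own position and integrates it over time. In a projection-based system the field value would be encoded in the pixels projected beneath the robot; the attractive goal field below is a hypothetical example, not one taken from the surveyed papers:

```python
import numpy as np

def goal_field(goal):
    # A velocity vector field attracting robots toward `goal`:
    # a unit vector toward the goal, zero at the goal itself.
    def field(pos):
        d = goal - pos
        n = np.linalg.norm(d)
        return d / n if n > 1e-9 else np.zeros(2)
    return field

def step_robots(positions, field, dt=0.1):
    # Each robot reads the field at its own position -- in a projection-based
    # system this value would be decoded from the projected pixels under it --
    # and takes one Euler integration step.
    return np.array([p + dt * field(p) for p in positions])
```

Because every robot consults only the field at its own location, the same projected image steers many robots at once without per-robot communication.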

Regular Papers
  • Tao Tao, Photchara Ratsamee, Jason Orlosky, Yuki Uranishi, Haruo Takem ...
    Article type: Paper
    2020 Volume 25 Issue 2 Pages 158-168
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    Observing rapid movements, such as the key moments needed to correctly judge fouls in sports, is a challenging task for referees. Though assistive technologies such as 2D slow-motion replays exist, users are still limited by the viewing angle and the degree of interaction these videos allow.

    In this paper, we propose an interactive 3D vision augmentation framework called MomentViz that allows for both high frame rate recording and interactive time-control in 3D space. This system is designed to allow users not only to freely observe dynamic 3D motion from different viewpoints but also to control time from any given viewpoint. To accomplish this, we start by fusing high-FPS RGB and depth data to carry out 3D reconstruction. Through a VR HMD, time can then be manipulated in a user-selected sub-region of interest of the point cloud by raycasting with a controller. To evaluate this framework for interaction, we conducted experiments testing participants' judgments of rapid movements. Results showed that MomentViz outperforms conventional visualizations in terms of accuracy, required views, and subjective user experience.
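The raycast selection of a sub-region of interest can be sketched as a cone test around the controller ray: keep the points whose direction from the ray origin lies within a small angle of the ray. The 10-degree half-angle below is an assumed parameter, not one reported for MomentViz:

```python
import numpy as np

def select_cone(points, origin, direction, half_angle_deg=10.0):
    # Boolean mask of points inside a cone around the controller ray:
    # True where the direction from `origin` to the point is within
    # `half_angle_deg` of `direction`.
    d = direction / np.linalg.norm(direction)
    v = points - origin
    dist = np.linalg.norm(v, axis=1)
    cos_to_ray = (v @ d) / np.maximum(dist, 1e-9)  # cosine of angle to the ray
    return cos_to_ray >= np.cos(np.radians(half_angle_deg))
```

Time manipulation would then be applied only to the masked subset of the point cloud, leaving the rest of the scene playing normally.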

  • 油井 俊哉, 橋田 朋子
    Article type: Short Paper (Content)
    2020 Volume 25 Issue 2 Pages 169-172
    Published: 2020
    Released: 2020/06/30
    JOURNAL FREE ACCESS

    We have developed a system called "Curating Frame" that transforms everyday things into works of art. We achieve this by using machine recognition, in this case false recognition, to let the picture frame move autonomously so that it stays in front of target objects and generates slightly shifted titles. When art supplies such as picture frames and captions are placed in the field of view, people can easily misapprehend the surroundings behind them as works of art. A slightly shifted title stimulates people's imagination and makes it easier for them to reinterpret ordinary objects.
