The Journal of the Institute of Image Electronics Engineers of Japan
Online ISSN : 1348-0316
Print ISSN : 0285-9831
ISSN-L : 0285-9831
Current issue
Displaying 1-12 of 12 articles from this issue
  • Hayato WATANABE, Naoto OKAICHI, Masanori KANO, Hisayuki SASAKI, Jun AR ...
    2022 Volume 51 Issue 4 Pages 300-307
    Published: 2022
    Released on J-STAGE: December 25, 2023
    JOURNAL RESTRICTED ACCESS

    Light field displays can display naturally viewable three-dimensional (3D) images with smooth motion parallax, without the use of special glasses, by faithfully reproducing light ray information from objects. Because of this advantage, various display methods based on light field reproduction have been actively researched and developed. When focusing on the parallax direction, light field displays can be classified into two types: horizontal-parallax and full-parallax. Although the full-parallax type has lower display characteristics, such as pixel density, than the horizontal-parallax type, it has the potential to be used in a broader range of fields because 3D images can be viewed even when the viewer’s face is tilted. In this paper, we primarily review full-parallax-type light field display methods such as integral 3D display, Aktina Vision, and layered 3D display, while also describing horizontal-parallax-type display methods. In addition, we briefly review volumetric displays, which can display full-parallax 3D images based on a different principle from light field displays.

    Download PDF (2500K)
  • Jiaqing LIU, Shoji KISITA, Shurong CHAI, Tomoko TATEYAMA, Yutaro ...
    2022 Volume 51 Issue 4 Pages 309-317
    Published: 2022
    Released on J-STAGE: December 25, 2023
    JOURNAL RESTRICTED ACCESS

    Human walking patterns contain a wide range of non-verbal information, including identity and emotion. Recent work using the Spatial-Temporal Graph Convolutional Network (ST-GCN), which models the inherent spatial connections between skeletal joints, has shown promising performance for skeleton-based emotion perception from gait. However, the significance of individual nodes may change depending on the emotion, which current studies do not take into account. Efficiently modeling the significance of nodes for different emotions is a major issue in this task. To address this problem, this paper proposes a novel dual-attention module that helps ST-GCN perceive the correlations between nodes. Experimental results on the Emotion-Gait dataset demonstrate that our method outperforms current state-of-the-art methods. We also visualize the attention-based weights of the nodes to better understand the importance of each node in emotion perception. We observe that the entire gait is light when people are happy; when angry, the whole body moves violently with short strides; and sadness makes it difficult to move forward.
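    The per-node weighting idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's dual-attention module: the scoring vector `w` stands in for a learned layer, and the joint features are random toy data.

    ```python
    import numpy as np

    def node_attention(features, w):
        """Per-node attention: score each skeletal joint, softmax the
        scores, and reweight the joint features accordingly.

        features: (N, C) array, one C-dim feature per joint.
        w: (C,) scoring vector (a stand-in for a learned layer).
        """
        scores = features @ w                  # (N,) raw importance per joint
        scores = scores - scores.max()         # numerical stability for exp
        alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over joints
        return features * alpha[:, None], alpha

    # toy example: 5 joints with 3-dim features
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((5, 3))
    weighted, alpha = node_attention(feats, np.ones(3))
    ```

    Visualizing `alpha` per emotion class is what lets one see, for example, which joints dominate an angry versus a sad gait.
    
    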

    Download PDF (4464K)
  • Takami YAMAMOTO, Masanori NAKAYAMA, Akira SAKAMOTO, Issei FUJISHIRO
    2022 Volume 51 Issue 4 Pages 318-326
    Published: 2022
    Released on J-STAGE: December 25, 2023
    JOURNAL RESTRICTED ACCESS

    This article is a follow-up to “3D Distance Field-Based Apparel Modeling”, published in IIEEJ Transactions on Image Electronics and Visual Computing. In the previous study, a virtual torso was generated by first left-right symmetrizing and smoothing a volumetrically represented human body model. Thresholding was then applied to the 3D distance field derived from the volume model to develop clothing with a finely calculated spaciousness for women’s and men’s body prototypes and men’s vests. In this article, we show that an extension of the processing flow proposed in the previous paper allows an individualized actual torso, sufficiently close to the human body, to be produced for draping. First, candidate threshold values yielding the same shape as the data obtained from a 3D scanner were examined. A paired t-test and a two-factor analysis of variance conducted on the 3D measurement data of 20 participants in the evaluation experiment indicated that the isosurface at a threshold of 0.4 in the volume datasets (0: outside the body, 1: inside the body) is the closest to the scanned human body shapes. Next, to enable the draping of closely fitted garments of various designs, the isosurface data at the threshold of 0.4 was fabricated by Styrofoam cutting to produce an individualized actual torso. Finally, using the obtained torso, a close-fitting garment was created through draping, and its fit to the body shape was confirmed through trial fitting evaluations.
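    The thresholding step described above can be illustrated with a short sketch. The volume and the `torso_surface_mask` helper are hypothetical stand-ins, not the authors' code; they show how a 0.4 threshold selects the voxels whose boundary forms the reported isosurface.

    ```python
    import numpy as np

    def torso_surface_mask(volume, threshold=0.4):
        """Binary mask of voxels at/above the threshold in a 0..1 volume.

        volume: 3D float array where 0 = outside the body, 1 = inside.
        The chosen isosurface lies on the boundary of this mask.
        """
        return volume >= threshold

    # toy volume: a centered blob with values falling from 1 to 0 outward
    z, y, x = np.mgrid[-1:1:20j, -1:1:20j, -1:1:20j]
    vol = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)
    mask = torso_surface_mask(vol, 0.4)
    ```

    An actual mesh for fabrication would be extracted from this mask with an isosurfacing algorithm such as marching cubes (e.g. `skimage.measure.marching_cubes`).
    
    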

    Download PDF (2493K)
  • Akari YABANA, Makoto FUJISAWA, Masahiko MIKAWA
    2022 Volume 51 Issue 4 Pages 327-331
    Published: 2022
    Released on J-STAGE: December 25, 2023
    JOURNAL RESTRICTED ACCESS

    This paper proposes a method to reproduce daytime fireworks as CG animation using physical simulation. Among daytime fireworks, this paper focuses on a type called ”Enryu”, which launches a parachute carrying a smoke candle; as the parachute falls, the smoke traces a spiral trajectory. We simulate Enryu by combining a parachute simulation with a fluid simulation of the smoke. In the parachute simulation, the top part is modeled as a parachute based on aerodynamics, the bottom part is modeled as a single mass point, and the string connecting the two parts is represented by a spring and a damper. The smoke is simulated by a grid-based fluid simulation, and Enryu is reproduced by emitting smoke from the bottom part of the parachute.
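    The spring-damper string connecting the parachute to the bottom mass point can be sketched as a single integration step. This is an assumed minimal model, not the paper's implementation: the top attachment point is treated as given, and all parameter values below are illustrative.

    ```python
    import numpy as np

    def step(x_top, x_bot, v_bot, k, c, rest_len, mass, g, dt):
        """One semi-implicit Euler step for the bottom mass hanging
        from the parachute by a spring-damper 'string'."""
        d = x_bot - x_top
        length = np.linalg.norm(d)
        n = d / length                     # unit vector along the string
        stretch = length - rest_len
        rel_speed = v_bot @ n              # radial velocity of bottom mass
        f_spring = -k * stretch * n        # Hooke spring along the string
        f_damp = -c * rel_speed * n        # damper along the string
        f = f_spring + f_damp + mass * g   # total force incl. gravity
        v_new = v_bot + dt * f / mass
        x_new = x_bot + dt * v_new         # update position with new velocity
        return x_new, v_new

    # toy step: bottom mass stretched 0.5 below the 1.0 rest length
    g = np.array([0.0, 0.0, -9.8])
    x_new, v_new = step(np.zeros(3), np.array([0.0, 0.0, -1.5]),
                        np.zeros(3), k=50.0, c=2.0, rest_len=1.0,
                        mass=0.1, g=g, dt=0.01)
    ```

    In the full simulation this step would be coupled with the aerodynamic parachute model above the string and a smoke source attached to the bottom mass.
    
    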

    Download PDF (4910K)
  • Kyosuke FUJITA, Yuki MORIMOTO
    2022 Volume 51 Issue 4 Pages 332-337
    Published: 2022
    Released on J-STAGE: December 25, 2023
    JOURNAL RESTRICTED ACCESS

    Many video works apply so-called liquid motion effects, in which illustrations move and splash like liquid. Animators create such animations by hand or with existing software; however, the former requires a great deal of work, while the latter produces inflexible motion and limited expressiveness. We propose a method to generate such effect animations easily from a static image and a path along which droplets move. In our method, an elastic simulation based on position-based dynamics enables expressive interaction between the input illustration and the droplets. The method also preserves the original shapes of the input illustrations while applying metaballs, reproducing the features of existing liquid motion effects.
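    The core of position-based dynamics mentioned in the abstract is the constraint-projection step; the sketch below shows the standard PBD distance-constraint projection between two particles, as a generic illustration rather than the authors' code.

    ```python
    import numpy as np

    def project_distance(p1, p2, w1, w2, rest, stiffness=1.0):
        """PBD distance-constraint projection between two particles.

        p1, p2: positions; w1, w2: inverse masses; rest: rest length.
        Moves the pair toward being exactly 'rest' apart, weighted by
        inverse mass (heavier particles move less)."""
        d = p2 - p1
        length = np.linalg.norm(d)
        if length == 0.0 or w1 + w2 == 0.0:
            return p1, p2                  # degenerate: nothing to project
        corr = stiffness * (length - rest) / (w1 + w2) * (d / length)
        return p1 + w1 * corr, p2 - w2 * corr

    # two equal-mass particles 2.0 apart, constrained to rest length 1.0
    a = np.array([0.0, 0.0])
    b = np.array([2.0, 0.0])
    a2, b2 = project_distance(a, b, 1.0, 1.0, rest=1.0)
    ```

    Iterating such projections over a mesh of constraints is what keeps the elastically simulated illustration close to its original shape while it deforms.
    
    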

    Download PDF (2355K)