IIEEJ Transactions on Image Electronics and Visual Computing
Online ISSN : 2188-1901
Print ISSN : 2188-1898
ISSN-L : 2188-191X
Volume 8, Issue 2
Displaying 1-5 of 5 articles from this issue
  • Mei KODAMA
    Article type: Contributed Papers-- Special Issue on Extended Papers Presented in IEVC2019 Part Ⅱ --
    2020 Volume 8 Issue 2 Pages 79-90
    Published: December 15, 2020
    Released on J-STAGE: March 31, 2021
    JOURNAL FREE ACCESS

    Preventing visually induced motion sickness (VIMS) caused by screen shake in videos is an important issue for viewers. So far, two major approaches have been taken to prevent it. The first extracts sickness information from biometric data; since this information can only be extracted after the viewer's physical condition has already deteriorated, processing delay is inevitable. The second extracts motion information from the video by image processing; however, detailed motion analysis such as global motion estimation is reported to require long processing times. A screen shake determination method had therefore been proposed that uses block matching as a simple motion analysis together with motion direction histograms and their similarity. However, the conventional method suffers from reduced detection accuracy when the amount of screen shake is small, and it cannot extract the direction of the screen shake. To solve these problems, this paper proposes a novel screen shake determination method based on 2D motion histogram analysis. The method has three features: the use of gaze areas, group transition analysis of the maximum frequency, and maximum group ratio analysis. A new evaluation value Ev is defined that accounts for the accuracy on both no-swing and pseudo-swing images. Simulation experiments show that Ev in the proposed method is at most 4.02 smaller than in the conventional method for small screen shake. Therefore, the proposed method improves the accuracy of detecting small screen shake over the conventional method and can extract the direction of the screen shake. Furthermore, it is shown to remove the need to set a threshold for the histogram correlation. An adaptive method for each gaze area, and adaptive selection of the number of directions and divisions in motion vector space, deserve consideration but are left for future study.
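The baseline pipeline summarized above (block matching as a simple motion analysis, followed by a 2D histogram of motion vectors whose maximum-frequency bin indicates the dominant shake direction) can be sketched as follows. This is a generic illustration, not the paper's implementation; the block size, search range, and function names are assumptions.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Exhaustive SAD block matching: for each block of `curr`, find the
    offset (dx, dy) of the best-matching block in `prev`."""
    h, w = prev.shape
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(ref - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dx, dy)
            vectors.append(best_v)
    return vectors

def motion_histogram_2d(vectors, search=4):
    """Accumulate block motion vectors into a 2D frequency histogram."""
    hist = np.zeros((2 * search + 1, 2 * search + 1), dtype=int)
    for dx, dy in vectors:
        hist[dy + search, dx + search] += 1
    return hist

def dominant_shake(hist, search=4):
    """Return the (dx, dy) of the maximum-frequency histogram bin."""
    iy, ix = np.unravel_index(np.argmax(hist), hist.shape)
    return int(ix) - search, int(iy) - search
```

For a frame pair related by a uniform shift, the maximum-frequency bin of the histogram recovers the dominant motion direction, which is the basic signal the shake determination builds on.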

    Download PDF (1619K)
  • Shinji MIZUNO
    Article type: Contributed Papers-- Special Issue on Extended Papers Presented in IEVC2019 Part Ⅱ --
    2020 Volume 8 Issue 2 Pages 91-99
    Published: December 15, 2020
    Released on J-STAGE: March 31, 2021
    JOURNAL FREE ACCESS

    In this paper, the author developed a method to generate 3DCG models of trains and cars from simple pictures drawn with pens on paper. The author also developed a system that allows users to watch the generated 3DCG models of trains and cars running in a 3DCG diorama in three dimensions. The author created digital content applying the developed method and system, and exhibited it at events hosted by a railway company and an automobile company. With this content, users could run 3DCG models of trains or cars in the CG diorama and watch them immediately, just by drawing pictures of vehicles on paper with pens. More than 300 children experienced the content at each event and enjoyed creating 3DCG vehicles by drawing and watching them. The author confirmed that the proposed method is useful for creating interactive content that attracts many children at events.

    Download PDF (7486K)
  • Eri YOKOYAMA, Hiroshi SUNAGA, Makoto J. HIRAYAMA
    Article type: System Development Paper-- Special Issue on Extended Papers Presented in IEVC2019 Part Ⅱ --
    2020 Volume 8 Issue 2 Pages 100-108
    Published: December 15, 2020
    Released on J-STAGE: March 31, 2021
    JOURNAL FREE ACCESS

    This paper proposes two e-learning applications specially designed for classical Japanese literature classes. The first is a groupware application that lets users attach comments to handscroll images: part of the handscroll is shown as one scene, users can place memo cards on it, and the scene moves together with the cards when scrolled. A distinctive point of the application is that message cards can be placed at any point on the handscroll, moved, and modified, and the processed data can be stored in a database. The other is a jigsaw puzzle game using classical literature images. Every piece has the same rectangular shape, so users must examine the detail of each piece to complete the puzzle, which encourages students to look closely at the literature images. Students who actually used these applications said they were helpful in learning literature, and it can be said that the applications effectively help get unmotivated students interested in literature classes. It was also found that, through playing these games, the students' interest in programming techniques was enhanced as well.

    Download PDF (3726K)
  • Haoqi GAO, Koichi OGAWARA
    Article type: Contributed Paper-- Special Issue on CG & Image Processing Technologies for Automation, Labor Saving and Empowerment --
    2020 Volume 8 Issue 2 Pages 110-120
    Published: December 15, 2020
    Released on J-STAGE: March 31, 2021
    JOURNAL FREE ACCESS

    In training deep neural networks for supervised learning tasks, data augmentation methods are often used to increase the size of the training dataset. This technique is particularly useful when the training dataset is small, for example when its content involves privacy issues and cannot be made public, or when the categories in the training dataset are unbalanced. During training, a small training dataset leads to model overfitting. Data augmentation methods using Generative Adversarial Networks (GANs) and neural style transfer have been shown to improve performance on supervised learning tasks. However, the traditional GAN is prone to collapse during training, which makes the generation process free and uncontrollable. Hence, the network model may fail to produce deterministic results, which limits its applications. In this paper, we propose an improved GAN-based data augmentation method for image classification tasks. We compare our model with the latest GAN models, and the results show that our algorithm is effective. When applying the generated synthetic images to the facial expression attribute classification task, our method achieves a 72.5% accuracy rate on the FER2013 PrivateTest dataset and a 71.2% accuracy rate on the FER2013 PublicTest dataset.
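As an illustration of the class-balancing use of synthetic data mentioned above, the sketch below tops up each minority class with generated samples until every class matches the majority-class count. The `generate` callback is a hypothetical stand-in for sampling a trained (conditional) GAN generator; it is not the paper's model, and all names are illustrative.

```python
import numpy as np

def balance_with_synthetic(images, labels, generate, rng=None):
    """Augment an unbalanced dataset: for each class with fewer samples
    than the majority class, call `generate(label, n)` to obtain `n`
    synthetic images, then shuffle the combined dataset."""
    images = np.asarray(images)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_x, out_y = [images], [labels]
    for c, n in zip(classes, counts):
        if n < target:
            out_x.append(np.asarray(generate(c, target - n)))
            out_y.append(np.full(target - n, c, dtype=labels.dtype))
    x = np.concatenate(out_x)
    y = np.concatenate(out_y)
    rng = rng if rng is not None else np.random.default_rng(0)
    order = rng.permutation(len(y))  # shuffle real and synthetic together
    return x[order], y[order]
```

In a real pipeline the balanced arrays would then feed the image classifier's training loop in place of the raw, unbalanced dataset.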

    Download PDF (4288K)
  • Jaime SANDOVAL, Kazuma UENISHI, Munetoshi IWAKIRI, Kiyoshi TANAKA
    Article type: Contributed Paper
    2020 Volume 8 Issue 2 Pages 121-135
    Published: December 15, 2020
    Released on J-STAGE: March 31, 2021
    JOURNAL FREE ACCESS

    Sphere detection in point clouds is an important task in 3D computer vision, with applications such as reverse engineering, medical imaging, and Terrestrial Laser Scan (TLS) alignment. Several approaches have been proposed to detect spheres in point clouds, but conventional methods are inefficient and inaccurate because they depend on random sampling, point-wise voting, or normal vector estimation to generate hypothetical spheres. To overcome these drawbacks, we propose a novel algorithm that employs sliding voxels and Hough voting to robustly and efficiently detect spheres in unorganized point clouds. In contrast to conventional methods, the proposed method can analyze all the points in a point cloud without deteriorating efficiency or accuracy. Through experiments, we found that the proposed method drastically reduces processing time and achieves more accurate and robust performance than conventional methods under more severe conditions.
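To illustrate the Hough-voting idea in its simplest form, the sketch below localizes a single sphere of known radius by letting each point cast votes for candidate centers along its surface normal, accumulated in a voxel grid. Note that this deliberately uses the normal-based voting that the abstract identifies as a drawback of conventional methods; the paper's sliding-voxel scheme avoids normal estimation entirely. All names and parameters here are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def hough_sphere_center(points, normals, radius, voxel=0.5):
    """Each point votes for two candidate centers, p - r*n and p + r*n
    (the normal's sign is unknown); votes are quantized into a voxel
    grid, and the fullest voxel approximates the sphere center."""
    acc = Counter()
    for p, n in zip(points, normals):
        for cand in (p - radius * n, p + radius * n):
            acc[tuple(np.floor(cand / voxel).astype(int))] += 1
    cell = max(acc, key=acc.get)
    return (np.array(cell) + 0.5) * voxel  # center of the winning voxel
```

Votes from the correct normal sign all fall in one voxel near the true center, while the opposite-sign votes scatter over a larger sphere, so the accumulator maximum is a robust center estimate (up to the voxel size).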

    Download PDF (5204K)