The Journal of the Institute of Image Electronics Engineers of Japan
Online ISSN : 1348-0316
Print ISSN : 0285-9831
ISSN-L : 0285-9831
Volume 48, Issue 4
  • Kazutoshi OKA, Masayuki MUKUNOKI
    2019 Volume 48 Issue 4 Pages 488-496
    Published: 2019
    Released on J-STAGE: December 20, 2022
    JOURNAL FREE ACCESS

    In this paper, we propose “3D-Super Resolution Generative Adversarial Networks” (3D-SRGAN), a method that generates a higher-resolution 3D voxel model from a lower-resolution input 3D voxel model. This kind of technology is called Super Resolution. There are many studies on Super Resolution for images, but few for 3D models. We extend SRGAN, which is known as an excellent Super Resolution method for images, and apply it to 3D voxel models. Through comparative experiments, we show that 3D-SRGAN generates better high-resolution 3D voxel models than simple 3D voxel scaling. We also show that a 3D-SRGAN trained on one object class can generate higher-resolution 3D voxel models of other classes.
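    The "simple 3D voxel scaling" baseline that the abstract compares 3D-SRGAN against can be sketched as nearest-neighbor upsampling of an occupancy grid (an illustrative, assumption-level sketch; the actual 3D-SRGAN generator is a learned 3D-convolutional network not reproduced here):

```python
def upscale_voxels(voxels, factor):
    """Nearest-neighbor upscaling of a 3D occupancy grid (nested lists).

    Each low-resolution voxel is replicated into a factor^3 block,
    which is why the result looks blocky compared with a learned
    super-resolution model.
    """
    d, h, w = len(voxels), len(voxels[0]), len(voxels[0][0])
    out = [[[0] * (w * factor) for _ in range(h * factor)]
           for _ in range(d * factor)]
    for z in range(d * factor):
        for y in range(h * factor):
            for x in range(w * factor):
                out[z][y][x] = voxels[z // factor][y // factor][x // factor]
    return out

low = [[[1, 0], [0, 1]],
       [[0, 1], [1, 0]]]          # 2x2x2 toy voxel model
high = upscale_voxels(low, 2)     # 4x4x4 blocky result
```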

  • Motohiro MAKIGUCHI, Hideaki TAKADA, Taiki FUKIAGE, Shin’ya NISHIDA
    2019 Volume 48 Issue 4 Pages 497-505
    Published: 2019
    Released on J-STAGE: December 20, 2022
    JOURNAL FREE ACCESS

    Glasses-free three-dimensional (3D) display technology reproduces objects with a high sense of presence. The multi-view 3D display method, which projects multiple viewpoint images, can present 3D images to multiple observers, so it can be applied in various applications. However, a huge number of viewpoints is required to achieve smooth motion parallax with this method. We use a visual perception mechanism called Linear Blending as an approach to reducing the number of viewpoints: the observer perceptually interpolates the intermediate viewpoint image from a blend of adjacent viewpoint images, so fewer viewpoints are needed. On the other hand, image quality degrades at intermediate viewpoints where adjacent viewpoint images overlap, so image-quality variation with motion parallax becomes a problem. In this paper, we propose a method for reducing this image-quality variation that utilizes the principle of Hidden Stereo to generate images such that no double edges appear when the images of two adjacent viewpoints are overlapped. We generate viewpoint images assuming a 360-degree viewpoint movement around the object, and an image-quality evaluation value shows that the method alleviates image-quality variation with motion parallax.
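    The Linear Blending mechanism the abstract relies on can be sketched directly: at a viewing position between two projected viewpoints, the eye receives a weighted sum of the two adjacent viewpoint images, which is perceived as the intermediate view (a minimal illustration with toy grayscale images, not the authors' implementation):

```python
def linear_blend(img_a, img_b, alpha):
    """Blend two same-sized grayscale images.

    alpha=0 returns img_a, alpha=1 returns img_b, and intermediate
    alpha values correspond to intermediate viewing positions.
    """
    return [[(1 - alpha) * a + alpha * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

left  = [[0.0, 0.2], [0.4, 0.6]]   # adjacent viewpoint image A
right = [[1.0, 0.8], [0.6, 0.4]]   # adjacent viewpoint image B
mid   = linear_blend(left, right, 0.5)  # perceived intermediate viewpoint
```

The double-edge artifact the paper addresses arises exactly here: when `left` and `right` contain the same edge at shifted positions, their weighted sum shows both copies, which is what the Hidden Stereo image generation is designed to avoid.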

  • Masato NAKADA, Issei FUJISHIRO
    2019 Volume 48 Issue 4 Pages 506-515
    Published: 2019
    Released on J-STAGE: December 20, 2022
    JOURNAL FREE ACCESS

    Human hands are particularly eye-catching parts in the first-person view, and realistic hand motion has long been required in computer graphics (CG). An actual human hand consists of volumetric bones and various organs, such as tendons, muscles, and veins. However, a natural change in the appearance of the hand’s surface caused by the motion of its internal structures, and an expression of the dynamism through that change, can hardly be realized by most conventional CG models for an articulated human body, because they simply ‘skin’ the bones of a conceptual skeleton with a surface mesh. In this paper, a human hand model, called ‘Fast Implicit model with Semi-anatomical sTructures’ (‘FIST’), is proposed to model the natural change in the appearance of the hand’s surface interactively. The hand can be modeled plausibly and efficiently by semi-anatomical modeling, in which the bones are modeled anatomically while the tendons, muscles, skin, and veins are modeled artificially. Each organ is expressed with its own scalar function, and implicit modeling lets the change in the appearance of the hand’s surface reflect the motions of these internal structures.
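    The implicit-modeling idea behind this ("each organ is expressed with its own scalar function") can be sketched with hypothetical field functions, not the paper's actual ones: the surface is the zero level set of a combined scalar field, so moving any internal primitive changes the surface shape:

```python
import math

def bone_field(p, center, radius):
    """Signed distance to a spherical 'bone' primitive (illustrative):
    negative inside, zero on the surface, positive outside."""
    return math.dist(p, center) - radius

def hand_field(p, bone_centers, radius=1.0):
    """Combine per-organ fields; taking the minimum unions the shapes,
    so the overall surface follows the internal primitives as they move."""
    return min(bone_field(p, c, radius) for c in bone_centers)

centers = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
inside  = hand_field((0.0, 0.0, 0.0), centers)   # negative: inside the surface
outside = hand_field((5.0, 0.0, 0.0), centers)   # positive: outside
```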

  • Yuki MORIMOTO, Yosuke KOBAYASHI, Tokiichiro TAKAHASHI
    2019 Volume 48 Issue 4 Pages 516-520
    Published: 2019
    Released on J-STAGE: December 20, 2022
    JOURNAL FREE ACCESS

    We propose a system that generates 2.5D animation through a user interface. 2.5D animation is attracting attention as a method of producing 3D-like animation by animating 2D character images. However, editing the input data required to generate 2.5D animation takes a lot of time. In our system, the user inputs several whole-body images and then adjusts joint positions via a GUI to generate animation from existing motion-data resources. We also aim to reduce the editing cost; for example, we apply template matching to locate joint positions. Our system generates better-quality results than the previous method of 2.5D animation generation while reducing production costs through image processing and the user interface.
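    The template-matching step mentioned in the abstract can be sketched as a sum-of-squared-differences search (a hypothetical pure-Python illustration; the paper's actual matching procedure is not given here):

```python
def match_template(image, template):
    """Slide a small joint template over a grayscale image and return
    the (row, col) position with the lowest sum of squared differences,
    i.e. the best candidate joint position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum((image[y + j][x + i] - template[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
pos = match_template(img, tpl)   # exact match at row 1, col 1
```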

  • Yuki IKEDA, Yuki IGARASHI
    2019 Volume 48 Issue 4 Pages 521-525
    Published: 2019
    Released on J-STAGE: December 20, 2022
    JOURNAL FREE ACCESS

    For handicraft works, production kits are sold so that even beginners can make them easily. However, even for a simply shaped pouch, it is difficult to design a pattern corresponding to the desired finished size. In this paper, we propose a system that supports pouch making, from designing what the user wants to make to actually sewing it. The user selects one of three designs and inputs the size. The system calculates the appropriate pattern from the input values and supports procurement of the necessary materials and cutting of the parts. The system also supports completion of the pouch by displaying the manufacturing procedure. Users can additionally design the pattern of the cloth they want to use and combine it with the paper-pattern making support system or the procedure display system. By displaying the manufacturing process using the pattern of the designed cloth, more concrete and motivating production support becomes possible.
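    The kind of pattern calculation the abstract describes, deriving cut-fabric dimensions from a desired finished size, can be sketched as simple seam-allowance arithmetic (the actual formulas used by the paper's system are not given here; the function name, allowances, and pouch construction below are illustrative assumptions):

```python
def pouch_pattern(width_cm, height_cm, seam_cm=1.0, opening_cm=2.0):
    """Cut size for a simple fold-over pouch made from one rectangle
    folded in half: add side-seam allowance to the width, and hem
    allowance for the opening to the doubled height."""
    cut_w = width_cm + 2 * seam_cm          # left and right side seams
    cut_h = 2 * height_cm + 2 * opening_cm  # front + back, plus hems
    return cut_w, cut_h

size = pouch_pattern(15.0, 10.0)   # finished 15 cm x 10 cm pouch
```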
