The Journal of the Institute of Image Electronics Engineers of Japan
Online ISSN : 1348-0316
Print ISSN : 0285-9831
ISSN-L : 0285-9831
PoPGAN: Points to Plant Translation with Generative Adversarial Network
Yuki YAMASHITA, Yuki MORIMOTO

2020 Volume 49 Issue 4 Pages 326-331

Abstract

In this study, we construct a deep learning model that generates realistic images and animations of plants from simple point inputs specifying the image content. Conventional deep-learning-based image generation makes such rough inputs difficult, because the input and output images must correspond pixel by pixel, and generating an animation requires a large amount of input data. Conversely, approaches that continuously change an image by extracting and manipulating its attributes struggle to produce high-quality plant animations in which details are clearly expressed. We therefore construct a two-stage deep learning model that takes point labels as input. As a result, high-quality images and animations in which plants change smoothly can be generated from a small amount of training data. Our quantitative evaluation showed that the generated images were sharper and less biased in appearance attributes such as leaf arrangement and plant size.
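The abstract does not detail the network architecture, so as a rough illustration only, the following is a minimal sketch of what a point-conditioned generator in the style of image-to-image translation GANs could look like. The class name, layer sizes, input resolution, and the single-stage encoder-decoder structure are all assumptions for illustration, not the authors' two-stage PoPGAN model.

```python
# Hypothetical sketch, not the authors' implementation: a generator that
# maps a sparse point-label map (1 channel) to an RGB plant image,
# loosely in the spirit of pix2pix-style conditional GAN generators.
import torch
import torch.nn as nn


class PointToPlantGenerator(nn.Module):
    """Encoder-decoder that translates a point-label map into an image."""

    def __init__(self, in_ch=1, out_ch=3, base=64):
        super().__init__()
        # Encoder: downsample the point map into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),       # 128 -> 64
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),    # 64 -> 32
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: upsample back to image resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),  # 32 -> 64
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),    # 64 -> 128
            nn.Tanh(),  # output pixels in [-1, 1]
        )

    def forward(self, point_map):
        return self.decoder(self.encoder(point_map))


if __name__ == "__main__":
    # A 128x128 point-label map with one marked plant location.
    points = torch.zeros(2, 1, 128, 128)
    points[:, 0, 64, 64] = 1.0
    gen = PointToPlantGenerator()
    fake_plants = gen(points)
    print(fake_plants.shape)  # torch.Size([2, 3, 128, 128])
```

In practice such a generator would be trained adversarially against a discriminator, and animation could be produced by feeding a sequence of gradually shifted point maps; how the paper's two stages divide this work is not specified in the abstract.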

© 2020 The Institute of Image Electronics Engineers of Japan