2020 Volume 49 Issue 4 Pages 326-331
In this study, we construct a deep learning model that generates realistic images and animations of plants from simple point inputs specifying the image contents. In conventional deep-learning-based image generation, rough inputs are difficult to use because the input and output images must correspond pixel by pixel, and a large amount of input data is required to generate an animation. Conversely, approaches that continuously change an image by extracting and manipulating its attributes struggle to produce high-quality plant animations in which fine details are clearly expressed. We therefore construct a two-stage deep learning model that takes point labels as input. As a result, high-quality images and animations in which plants change smoothly can be generated from a small amount of training data. Our quantitative evaluation showed that the generated images were sharper and less biased in appearance attributes such as leaf arrangement and plant size.
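The abstract does not specify the architecture, so the following is only a minimal sketch of the general idea of a two-stage generator conditioned on point labels: sparse point annotations are encoded as a label map, a first stage produces a coarse image, and a second stage refines it. The layer sizes, label channels, and class names (e.g., "leaf", "stem") are assumptions for illustration, not the authors' model.

```python
# Hypothetical two-stage image generator conditioned on point labels.
# Architectural details are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn


class StageGenerator(nn.Module):
    """Small encoder-decoder mapping a conditioning map to an RGB image."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class TwoStageModel(nn.Module):
    """Stage 1: point-label map -> coarse image.
    Stage 2: labels + coarse image -> refined image with clearer detail."""

    def __init__(self, num_label_channels: int = 2):
        super().__init__()
        self.stage1 = StageGenerator(num_label_channels)
        self.stage2 = StageGenerator(num_label_channels + 3)

    def forward(self, point_labels):
        coarse = self.stage1(point_labels)
        refined = self.stage2(torch.cat([point_labels, coarse], dim=1))
        return coarse, refined


# Point labels encoded as a sparse map: each annotated point activates a pixel
# in the channel for its (hypothetical) class.
labels = torch.zeros(1, 2, 64, 64)
labels[0, 0, 20, 32] = 1.0  # assumed "leaf" point
labels[0, 1, 50, 32] = 1.0  # assumed "stem" point

coarse, refined = TwoStageModel()(labels)
print(coarse.shape, refined.shape)  # both torch.Size([1, 3, 64, 64])
```

Splitting generation into a coarse stage followed by a refinement stage is one common way to obtain sharp detail from sparse conditioning input; whether the paper uses this particular decomposition is not stated in the abstract.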