IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Speech and Image Processing, Recognition>
3D Reconstruction Based on Style Transfer Data Augmentation
Taiki Saruwatari, Katsufumi Inoue, Michifumi Yoshioka

2020 Volume 140 Issue 11 Pages 1198-1206

Abstract

Recently, various technologies using PointCloud data and Deep Neural Networks (DNNs) have been actively researched. However, collecting PointCloud data from real objects with special sensors such as depth sensors is a time-consuming task. To deal with this problem, we focus on 3D reconstruction from a single image. Conventional methods construct a PointCloud from a single image that includes mask information; therefore, it is difficult to construct a PointCloud from an image without mask information. To remove the requirement for additional information such as a mask on the input image, we propose data augmentation based on style transfer for 3D reconstruction. It is known that a DNN trained with style-transformed images can learn shape features. By using the transformed images, the DNN can learn object shapes with various backgrounds and textures, and can obtain shape features even from images with cluttered backgrounds. From the experimental results, we confirmed that our proposed method could construct 3D object shapes as PointClouds while preserving shape information, without requiring additional information.
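The augmentation idea described above can be illustrated with a minimal sketch. Note that this is an assumption-laden illustration, not the authors' implementation: the paper uses neural style transfer, which is approximated here by a simple channel-wise statistics alignment (in the spirit of AdaIN), and the helper names `stylize` and `augment` are hypothetical.

```python
import numpy as np

def stylize(content, style, eps=1e-5):
    # Channel-wise mean/std alignment: re-texture the content image with the
    # style image's color statistics while preserving spatial structure.
    # A crude stand-in for full neural style transfer (illustration only).
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    s_mean = style.mean(axis=(0, 1), keepdims=True)
    s_std = style.std(axis=(0, 1), keepdims=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

def augment(content_images, style_images, rng):
    # Pair each training image with a randomly drawn style image, so the
    # network sees the same object shape under many textures/backgrounds
    # and is pushed to rely on shape rather than appearance cues.
    out = []
    for img in content_images:
        style = style_images[rng.integers(len(style_images))]
        out.append(stylize(img, style))
    return out

# Toy data: random H x W x 3 arrays standing in for RGB images.
rng = np.random.default_rng(0)
contents = [rng.random((64, 64, 3)) for _ in range(4)]
styles = [rng.random((64, 64, 3)) for _ in range(8)]
augmented = augment(contents, styles, rng)
```

In a training pipeline, the augmented images would be fed to the 3D-reconstruction DNN alongside (or instead of) the originals, with the target PointCloud unchanged, since style transfer alters texture but not object shape.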

© 2020 by the Institute of Electrical Engineers of Japan