In this paper, we tackle the problem of generating an entire 360-degree image given only one partial view of it. We believe there are two keys to achieving this 360-degree image completion: first, understanding the scene context; for instance, if a road appears in the front view, it should continue behind the viewer; and second, handling the area-wise information bias; for instance, the upper and lower areas of an equirectangular projection are sparse because of its distortion, while the center is dense because it is less affected. Although the context of the whole image can be captured by stacking dilated convolutions in series, as in recent conditional Generative Adversarial Network (cGAN)-based inpainting methods, such methods cannot simultaneously handle detailed local information. We therefore propose a novel generator network that uses multi-scale dilated convolutions to address the area-wise information bias within a single 360-degree image, together with a self-attention block that improves texture quality. Several experiments show that the proposed generator better captures the properties of a 360-degree image and provides an effective architecture for 360-degree image completion.
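To make the two architectural ingredients concrete, the sketch below outlines in PyTorch what a multi-scale dilated convolution block and a SAGAN-style self-attention block can look like. This is a minimal sketch under our own assumptions; the module names, channel sizes, and dilation rates are illustrative and are not the authors' exact configuration.

```python
# Illustrative sketch only: channel sizes and dilation rates are assumptions,
# not the configuration from the paper.
import torch
import torch.nn as nn


class MultiScaleDilatedConv(nn.Module):
    """Parallel dilated convolutions with different receptive fields, so that
    sparse regions (top/bottom of an equirectangular image) and dense regions
    (its center) can both be covered within one block."""

    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Fuse the concatenated branch outputs back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(out)


class SelfAttention(nn.Module):
    """SAGAN-style self-attention: each spatial position attends to all
    others, which helps produce globally consistent texture."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


# Toy usage on a feature map with the 1:2 aspect ratio of an
# equirectangular image.
x = torch.randn(1, 64, 32, 64)
y = SelfAttention(64)(MultiScaleDilatedConv(64)(x))
print(y.shape)  # torch.Size([1, 64, 32, 64])
```

The parallel-branch design is what lets a single block serve both biases at once: small dilations preserve detail where pixels are dense, while large dilations reach the context needed in the distorted, sparse areas.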