2024, Vol. 15, No. 3, pp. 577-587
Image inpainting aims to recover or reconstruct specific regions of a given image, and it is also used to remove unwanted objects from images. Existing image inpainting methods can be classified into patch-based methods and machine learning-based methods. The former suffers from a lack of global consistency and diversity in the restored region, while the latter is limited by its dependence on the domain of the dataset used for training. On the other hand, the single image generative adversarial network (SinGAN) has been proposed to generate a variety of images from a given single image; it is dataset-independent because it requires no prior training on a dataset. However, SinGAN cannot be applied to image inpainting because it requires a defect-free image for training. In this study, we propose a globally consistent, dataset-independent image inpainting method with diverse outputs. Our proposed method generalizes the architecture of SinGAN for image inpainting by introducing partial convolution and region normalization, enabling the generative network to be trained from a single image that contains unwanted or missing regions. To confirm the effectiveness of the proposed method, we quantitatively and qualitatively compared its image inpainting performance with that of existing patch-based methods. Experimental results show that the proposed method performs image inpainting with global consistency and diversity while achieving restoration performance comparable to that of existing methods.
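The abstract names two building blocks, partial convolution and region normalization, without implementation detail. The PyTorch sketch below shows one minimal way these layers are commonly realized: `PartialConv2d` follows the masked-and-renormalized convolution of Liu et al. (2018), and `RegionNorm` normalizes hole and valid regions with separate statistics in the spirit of region normalization (Yu et al., 2020). The class names, hyperparameters, and exact formulations are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Partial convolution sketch: convolve only over valid (unmasked)
    pixels and rescale by the fraction of valid inputs per window.
    Hypothetical layer; the paper's exact variant may differ."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer("mask_kernel",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W), 1 = valid pixel, 0 = hole; broadcast over channels.
        with torch.no_grad():
            valid = F.conv2d(mask, self.mask_kernel,
                             stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        # Renormalize by the number of valid pixels seen in each window,
        # keeping the bias outside the rescaling.
        scale = self.mask_kernel.numel() / valid.clamp(min=1)
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale + bias
        out = out * (valid > 0)  # zero out windows with no valid input
        # Updated mask: a location becomes valid if any input was valid.
        return out, (valid > 0).float()

class RegionNorm(nn.Module):
    """Region normalization sketch: normalize the hole and valid regions
    with separate statistics so corrupted pixels do not skew the stats."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_features, 1, 1))

    def _norm(self, x, region):
        # Per-sample, per-channel mean/variance over one region only.
        count = region.sum(dim=(2, 3), keepdim=True).clamp(min=1)
        mean = (x * region).sum(dim=(2, 3), keepdim=True) / count
        var = ((x - mean) ** 2 * region).sum(dim=(2, 3), keepdim=True) / count
        return (x - mean) / torch.sqrt(var + self.eps) * region

    def forward(self, x, mask):
        # The two regions partition the image, so the normalized parts sum.
        normed = self._norm(x, mask) + self._norm(x, 1 - mask)
        return normed * self.gamma + self.beta
```

Under this reading, such layers could stand in for the plain convolutions and normalizations in each SinGAN scale, with the hole mask downsampled alongside the image, so training statistics are computed only from uncorrupted pixels.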