Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
32nd (2018)
Session ID : 4M1-04
Image Modality Translation for Enriching Virtual Space
Shinta MASUDA, Takashi MACHIDA*, Takashi MATSUBARA, Kuniaki UEHARA
Abstract

Following the great success of machine learning on various benchmarks, its practical use is attracting attention. A machine learning system has to be trained on a wide variety of data samples and tested under various conditions, but collecting numerous data samples is very costly. Hence a demand for data augmentation arises. In this paper, we tackle the augmentation of real images by translating their modality to another modality, such as daytime vs. night-time. This data augmentation enables us to train and test a machine learning system in various modalities. We first demonstrate that the existing approaches pix2pix and cycle-GAN have difficulties when applied to data augmentation: pix2pix requires paired samples in both modalities, or else it cannot overcome the difference between the modalities, and cycle-GAN sometimes fails to keep consistency between the modalities. We propose modifications of these methods that improve consistency in image modality translation.
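As background for the consistency issue the abstract mentions, cycle-GAN trains two translators G (A to B, e.g. daytime to night-time) and F (B to A) with a cycle-consistency loss that penalizes the round trip F(G(x)) differing from x. The sketch below is a minimal illustration of that loss with toy numpy "generators"; the names `g`, `f` and the example functions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    """L1 cycle-consistency: mean |f(g(x)) - x|.

    g translates modality A to B; f translates back from B to A.
    A small value means the round trip preserves the image content.
    """
    return float(np.mean(np.abs(f(g(x)) - x)))

# Toy "generators": an invertible pair vs. a lossy pair.
g_shift = lambda x: x + 1.0     # A -> B
f_shift = lambda x: x - 1.0     # B -> A, exact inverse of g_shift
f_lossy = lambda x: x * 0.0     # discards content, so consistency breaks

x = np.linspace(0.0, 1.0, 5)
loss_good = cycle_consistency_loss(x, g_shift, f_shift)   # 0.0: round trip is exact
loss_bad = cycle_consistency_loss(x, g_shift, f_lossy)    # positive: content lost
```

When the backward translator discards information, the loss stays positive no matter how realistic G's outputs look, which is why the loss is used to keep the two modalities consistent during training.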

© 2018 The Japanese Society for Artificial Intelligence