ITE Transactions on Media Technology and Applications
Online ISSN : 2186-7364
ISSN-L : 2186-7364
Regular Section
[Paper] Phased Data Augmentation for Training a Likelihood-Based Generative Model with Limited Data
Yuta Mimura

2025, Volume 13, Issue 1, pp. 126-135

Abstract

Generative models excel in creating realistic images, yet their dependency on extensive datasets for training presents significant challenges, especially in domains where data collection is costly or challenging. Current data-efficient methods largely focus on Generative Adversarial Network (GAN) architectures, leaving a gap in training other types of generative models. Our study introduces “phased data augmentation” as a novel technique that addresses this gap by optimizing training in limited data scenarios without altering the inherent data distribution. By limiting the augmentation intensity throughout the learning phases, our method enhances the model's ability to learn from limited data, thus maintaining fidelity. Applied to a model integrating PixelCNNs with Vector Quantized Variational AutoEncoder 2 (VQ-VAE-2), our approach demonstrates superior performance in both quantitative and qualitative evaluations across diverse datasets. This represents an important step forward in the efficient training of likelihood-based models, extending the usefulness of data augmentation techniques beyond just GANs.
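The core idea described above, reducing augmentation intensity in discrete phases as training progresses, can be sketched as a simple schedule. The following is a minimal illustration, not the paper's implementation: the function names, the number of phases, and the probability values are all hypothetical assumptions chosen for clarity.

```python
import random

# Hypothetical sketch of a phased augmentation schedule: the probability of
# applying augmentation drops in discrete phases across training, so late
# training sees data closer to the original distribution.
def phased_aug_probability(epoch, total_epochs, probs=(0.8, 0.4, 0.0)):
    """Return the augmentation probability for the current epoch.

    Training is split into len(probs) equal phases; each later phase uses a
    weaker (smaller) augmentation probability. The values in `probs` are
    illustrative, not taken from the paper.
    """
    n_phases = len(probs)
    phase = min(epoch * n_phases // total_epochs, n_phases - 1)
    return probs[phase]

def maybe_augment(sample, epoch, total_epochs, augment, rng=random):
    """Apply `augment` to `sample` with the phase-dependent probability."""
    p = phased_aug_probability(epoch, total_epochs)
    return augment(sample) if rng.random() < p else sample
```

In this sketch, early epochs augment most samples (probability 0.8), a middle phase augments fewer (0.4), and the final phase trains on unaugmented data only (0.0), which is one plausible way to "limit augmentation intensity throughout the learning phases" without permanently distorting the learned distribution.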

© 2025 The Institute of Image Information and Television Engineers