Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 3Q5-IS-2b-03
Generative Image Synthesis as a Substitute for Real Images in Pre-training of Vision Transformers
*Luiz Henrique MORMILLE, Iskandar SALAMA, Masayasu ATSUMI
Abstract

Gathering data from the real world involves the time-consuming tasks of web scraping, data cleaning, and labelling. Aiming to alleviate these costly tasks, this paper proposes using fast Stable Diffusion models to synthesize images efficiently from text prompts, thereby eliminating the need for manual data collection and mitigating the risks of bias and mislabelling. Through extensive experimentation with a small-scale vision transformer across four downstream classification tasks, our study comprehensively compares models pre-trained on conventional datasets, datasets enriched with synthetic images, and entirely synthetic datasets. The outcomes underscore the efficacy of Stable Diffusion-synthesized images in yielding consistent model generalization and accuracy. Beyond the immediate benefit of fast dataset creation, our approach represents a robust solution for bolstering the performance of computer vision models. The findings underscore the transformative potential of generative image synthesis, offering a new paradigm for advancing the capabilities of machine learning in the realm of computer vision.
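The pipeline the abstract describes, synthesizing a labelled pre-training set from text prompts with Stable Diffusion, can be sketched as follows. This is a minimal illustration, not the authors' actual setup: the class names, prompt templates, model checkpoint, and directory layout are all assumptions, and the generation step uses the third-party `diffusers` library.

```python
# Hedged sketch: building a labelled synthetic dataset from text prompts.
# CLASS_NAMES and TEMPLATES are illustrative assumptions, not the paper's
# actual label set or prompt design.
from itertools import product

CLASS_NAMES = ["dog", "cat", "car", "airplane"]   # hypothetical label set
TEMPLATES = [
    "a photo of a {}",
    "a close-up photo of a {}",
    "a photo of a {} in the wild",
]

def build_prompts(class_names=CLASS_NAMES, templates=TEMPLATES):
    """Pair every class with every template, keeping the label for supervision."""
    return [(name, t.format(name)) for name, t in product(class_names, templates)]

def synthesize_dataset(out_dir="synthetic_data", images_per_prompt=2):
    """Generate labelled images with a pretrained Stable Diffusion pipeline.

    Requires the `diffusers` package, a GPU, and a model download, so it is
    shown for illustration rather than run here.
    """
    from pathlib import Path
    import torch
    from diffusers import StableDiffusionPipeline  # third-party dependency

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    for label, prompt in build_prompts():
        target = Path(out_dir) / label       # one sub-folder per class label
        target.mkdir(parents=True, exist_ok=True)
        for i in range(images_per_prompt):
            image = pipe(prompt).images[0]   # PIL.Image
            image.save(target / f"{prompt.replace(' ', '_')}_{i}.png")
```

Because each image inherits its label from the prompt that produced it, the class-labelled folder structure can feed directly into a standard image-classification pre-training loop, with no manual annotation step.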

© 2024 The Japanese Society for Artificial Intelligence