Proceedings of the Annual Conference of the Japanese Society for Artificial Intelligence (JSAI)
Online ISSN : 2758-7347
38th (2024)
Session ID: 3Q5-IS-2b-03

Generative Image Synthesis as a Substitute for Real Images in Pre-training of Vision Transformers
*Luiz Henrique MORMILLE, Iskandar SALAMA, Masayasu ATSUMI

Abstract

Gathering data from the real world involves time-consuming web scraping, data cleaning, and labelling. To alleviate these costly tasks, this paper proposes using fast Stable Diffusion models to synthesize images efficiently from text prompts, eliminating the need for manual data collection and mitigating the risks of bias and mislabelling. Through extensive experimentation with a small-scale Vision Transformer on four downstream classification tasks, our study provides a comprehensive comparison of models pre-trained on conventional datasets, datasets enriched with synthetic images, and entirely synthetic datasets. The results underscore the efficacy of Stable Diffusion-synthesized images in yielding consistent model generalization and accuracy. Beyond the immediate benefit of fast dataset creation, our approach offers a robust way to bolster the performance of computer vision models. The findings highlight the transformative potential of generative image synthesis, offering a new paradigm for advancing machine learning in computer vision.
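To make the data-generation step concrete, the following is a minimal sketch of how labelled synthetic images could be produced from class-name prompts with a fast Stable Diffusion variant. The Hugging Face diffusers library, the sdxl-turbo checkpoint, the prompt template, and the class list are illustrative assumptions, not the exact setup used in the paper; the resulting folder tree would then serve as a pre-training corpus for a small Vision Transformer.

```python
# Sketch: generate labelled synthetic images from class-name prompts
# with a fast text-to-image diffusion model (assumed: sdxl-turbo).
from pathlib import Path
import torch
from diffusers import AutoPipelineForText2Image

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",  # assumed fast Stable Diffusion variant
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

classes = ["dog", "airplane", "mushroom"]  # placeholder label set
out_root = Path("synthetic_pretrain")

for label in classes:
    (out_root / label).mkdir(parents=True, exist_ok=True)
    for i in range(8):  # images per class; kept tiny for illustration
        image = pipe(
            prompt=f"a photo of a {label}",
            num_inference_steps=1,  # turbo-style models need very few steps
            guidance_scale=0.0,
        ).images[0]
        image.save(out_root / label / f"{label}_{i:04d}.png")

# The class-named folders can be loaded with e.g.
# torchvision.datasets.ImageFolder for ViT pre-training.
```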

© 2024 The Japanese Society for Artificial Intelligence