Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 2G6-OS-21f-05

Scaling Laws of Dataset Size for VideoGPT
*Masahiro NEGISHI, Makoto SATO, Ryosuke UNNO, Koudai TABATA, Taiju WATANABE, Junnosuke KAMOHARA, Taiga KUME, Ryo OKADA, Yusuke IWASAWA, Yutaka MATSUO
Abstract

Over the past decade, deep learning has made significant strides in various domains by training large models with large-scale computational resources. Recent studies have shown that large-scale transformer models perform well on diverse generative tasks, including language modeling and image modeling. Efficient training of such large-scale models requires vast amounts of data, and many fields are working on building large-scale datasets. However, despite the development of simulator environments such as CARLA and large-scale datasets such as RoboNet, how the performance of world models, which attempt to acquire the temporal and spatial structure of environments, scales with dataset size has yet to be sufficiently studied. Thus, this work experimentally demonstrates a scaling law of a world model with respect to dataset size. We use VideoGPT and a dataset generated by the CARLA simulator. We also show that, when the number of model parameters is on the order of 10^7 or larger and the computational budget is limited, the budget should mainly be used to scale up dataset size.
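The abstract does not state the functional form of the scaling law. As a minimal sketch of how such a law is typically estimated, the snippet below fits a power law with an irreducible term, L(D) = (D_c / D)^alpha + L_inf, to validation losses measured at several dataset sizes; the functional form, variable names, and numbers are illustrative assumptions, not values reported by the authors.

```
# Sketch: fitting a dataset-size scaling law to (dataset size, validation loss)
# measurements. All data points and the p0 initial guess are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(D, D_c, alpha, L_inf):
    """Power-law decay of loss with dataset size D plus an irreducible floor."""
    return (D_c / D) ** alpha + L_inf

# Hypothetical measurements: dataset size (e.g., number of frames) vs. loss.
D = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
L = np.array([3.10, 2.71, 2.42, 2.21, 2.05])

# Fit the three parameters from a rough initial guess.
params, _ = curve_fit(scaling_law, D, L, p0=(1e4, 0.3, 1.5), maxfev=10000)
D_c, alpha, L_inf = params
print(f"D_c={D_c:.3g}, alpha={alpha:.3f}, L_inf={L_inf:.3f}")

# Extrapolate the fitted law to a tenfold larger dataset.
print(f"Predicted loss at 1e7 frames: {scaling_law(1e7, *params):.3f}")
```

A fit like this, repeated across model sizes, is what underlies compute-allocation statements of the kind made in the abstract (whether to spend a fixed budget on more data or more parameters).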

© 2023 The Japanese Society for Artificial Intelligence