Dataset distillation aims to compress a training dataset into a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original dataset. To enable gradient-based optimization, current text dataset distillation methods represent each synthetic sample as a sequence of word embeddings rather than as text; however, such embedding-level distilled datasets cannot be used to train models whose word embedding weights differ from those of the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called distilling dataset into language model (DiLM), which trains a language model to generate informative synthetic training samples as text data, rather than directly optimizing the synthetic samples themselves. We evaluated DiLM on various text classification datasets and showed that the distilled synthetic datasets from DiLM outperformed those produced by current coreset selection methods. DiLM achieved remarkable generalization performance both in training different types of models and in the in-context learning of large language models. Our code is available at https://github.com/arumaekawa/DiLM.
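To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of what distinguishes DiLM from embedding-level distillation: a fine-tuned generator language model emits synthetic training samples as plain text, which any downstream model can consume regardless of its embedding weights. The model name (`gpt2`), the label-conditioned prompt format, and the generation settings are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the DiLM idea: sample synthetic training texts from a
# generator LM instead of optimizing samples in embedding space.
# Assumptions: a GPT-2-style causal LM from the `transformers`
# library, and a hypothetical "label: ...\ntext:" prompt format.

from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_synthetic_samples(generator_name="gpt2", num_samples=8):
    """Sample synthetic training texts from a (fine-tuned) generator LM."""
    tokenizer = AutoTokenizer.from_pretrained(generator_name)
    model = AutoModelForCausalLM.from_pretrained(generator_name)

    # Hypothetical label-conditioned prompt; in DiLM the generator is
    # fine-tuned so that sampled texts are informative for training.
    prompt = "label: positive\ntext:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=48,
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]


if __name__ == "__main__":
    for sample in generate_synthetic_samples():
        print(sample)
```

Because the distilled samples are plain text, they can be tokenized and embedded by any downstream model, which is what allows the cross-model generalization (including in-context learning of large language models) reported in the abstract.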