自然言語処理 (Journal of Natural Language Processing)
Online ISSN: 2185-8314
Print ISSN: 1340-7619
ISSN-L: 1340-7619
Regular Paper (Peer-reviewed)
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation
Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura

2025, Volume 32, Issue 1, pp. 252-282

Abstract

Dataset distillation aims to compress a training dataset by creating a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original training dataset. Current text dataset distillation methods create each synthetic sample as a sequence of word embeddings rather than text in order to apply gradient-based optimization; however, such embedding-level distilled datasets cannot be used to train other models whose word embedding weights differ from those of the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called distilling the dataset into a language model (DiLM), which trains a language model to generate informative synthetic training samples as text data, rather than directly optimizing the synthetic samples. We evaluated DiLM on various text classification datasets and showed that the distilled synthetic datasets from DiLM outperformed those obtained with current coreset selection methods. DiLM also achieved remarkable generalization performance when training different types of models and in the in-context learning of large language models. Our code is available at https://github.com/arumaekawa/DiLM.
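For illustration, the sketch below shows the general text-level distillation idea described above: fine-tune a generator language model on label-conditioned real samples, sample a small synthetic training set as plain text, and train any downstream classifier on that text alone. This is a minimal, hypothetical sketch only; the model name ("gpt2"), the prompt format, and the sample_synthetic helper are assumptions for illustration, not DiLM's actual implementation (see the repository above for the authors' code).

```python
# Minimal, hypothetical sketch of text-level dataset distillation with a
# generator language model (illustrative only, not DiLM's implementation).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # assumed generator
generator = AutoModelForCausalLM.from_pretrained("gpt2")

# 1) (Not shown) fine-tune `generator` on label-conditioned real samples,
#    e.g. sequences such as "label: positive\ttext: <real review>".

# 2) Sample a few synthetic training examples per label as plain text.
def sample_synthetic(label: str, n: int = 5, max_new_tokens: int = 64):
    prompt = f"label: {label}\ttext:"                      # assumed prompt format
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = generator.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

synthetic_dataset = {
    label: sample_synthetic(label) for label in ("positive", "negative")
}

# 3) (Not shown) train any downstream classifier (e.g. a BERT-style model) on
#    `synthetic_dataset`; because the distilled data is plain text, it is not
#    tied to the word embedding weights of the distillation model.
```

Because the distilled samples are ordinary text, the same synthetic set can also be reused as demonstrations for in-context learning with large language models, which is not possible for embedding-level distilled datasets.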

© 2025 The Association for Natural Language Processing