Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
32nd (2018)
Session ID : 2A2-02

Deep few-shot learning with pseudo example optimization
*Akisato KIMURA, Zoubin GHAHRAMANI, Koh TAKEUCHI, Tomoharu IWATA, Naonori UEDA
Abstract

This paper proposes a novel method for training neural networks with a limited amount of training data. Our approach is based on knowledge distillation, which transfers knowledge from a deep reference neural network to a shallow target one. The proposed method employs this idea to mimic the predictions of non-neural-network reference models that are more robust against overfitting than the target neural network. Unlike almost all previous work on knowledge distillation, which requires a large amount of labeled training data, the proposed method requires only a small amount of training data. Instead, we introduce pseudo training data that are optimized as a part of the model parameters.
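As an illustration only, not the authors' code, the following sketch shows one way to realize this idea in PyTorch: a Gaussian-process regressor stands in for the non-neural reference model (the abstract does not name a specific teacher, so this is an assumption), and the pseudo inputs are registered as learnable parameters and updated jointly with the weights of a shallow student network.

```python
# A minimal sketch, NOT the authors' implementation. Assumptions: a GP
# regressor as the non-neural teacher, a toy 1-D regression task, and
# pseudo inputs optimized jointly with the student to minimize the
# distillation loss (the abstract only says they are "optimized as a
# part of the model parameters").
import torch
import torch.nn as nn

torch.manual_seed(0)

# Few-shot real training data (hypothetical toy regression problem).
x_real = torch.linspace(-2, 2, 8).unsqueeze(1)
y_real = torch.sin(x_real)

def gp_posterior_mean(x_query, x_train, y_train, lengthscale=0.5, noise=1e-2):
    """Posterior mean of a GP with an RBF kernel. Written in torch so it
    is differentiable in x_query, letting gradients reach the pseudo
    examples through the teacher as well as the student."""
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * lengthscale ** 2))
    k_xx = rbf(x_train, x_train) + noise * torch.eye(len(x_train))
    alpha = torch.linalg.solve(k_xx, y_train)
    return rbf(x_query, x_train) @ alpha

# Shallow target network to be trained from few examples.
student = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Pseudo training inputs, treated as extra model parameters.
x_pseudo = nn.Parameter(torch.randn(64, 1))

opt = torch.optim.Adam(list(student.parameters()) + [x_pseudo], lr=1e-2)
mse = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    # Supervised loss on the few real examples.
    loss_real = mse(student(x_real), y_real)
    # Distillation loss: mimic the teacher on the optimized pseudo inputs.
    teacher_pred = gp_posterior_mean(x_pseudo, x_real, y_real)
    loss_distill = mse(student(x_pseudo), teacher_pred)
    (loss_real + loss_distill).backward()
    opt.step()
```

How the pseudo inputs should move, e.g., minimizing the discrepancy jointly as above versus updating them adversarially to probe where student and teacher disagree, is a design choice the abstract does not specify; the sketch takes the simplest reading.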

© 2018 The Japanese Society for Artificial Intelligence