2020, Vol. 140, No. 6, pp. 664-672
Deep Neural Network (DNN) models have a large number of parameters. This allows DNNs to achieve high performance, but it also causes problems. First, training such a large number of parameters requires an enormous amount of training data. Second, high-spec devices are required because training so many parameters is computationally expensive. These problems hinder the deployment of DNNs in real-world tasks. To solve them, we propose a new training method for DNNs that combines transfer learning and knowledge distillation. The distinguishing point of our proposed method is that we learn the DNN parameters by applying both techniques simultaneously, i.e., we transfer the feature maps of a teacher DNN to a student DNN that is smaller than the teacher.
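To illustrate how feature-map transfer and knowledge distillation can be combined into a single training objective, the sketch below shows one common way to do this in PyTorch. The abstract does not specify the exact loss formulation, so the 1x1 projection layer, temperature `T`, and weighting coefficients `alpha`/`beta` are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumed formulation, not the paper's exact loss):
# feature-map transfer combined with soft-label knowledge distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """Matches a student feature map to a teacher feature map.

    A 1x1 convolution (a common choice in FitNets-style methods)
    projects the student's channels to the teacher's channel count
    before comparing the two maps with MSE.
    """
    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.proj(f_student), f_teacher)

def total_loss(student_logits, teacher_logits, f_student, f_teacher,
               labels, feat_loss: FeatureDistillLoss,
               T: float = 4.0, alpha: float = 0.5, beta: float = 1.0):
    """Hard-label cross-entropy + soft-label KD + feature-map transfer."""
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation (Hinton et al. style): KL divergence between
    # temperature-softened teacher and student distributions.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Feature-map transfer term, applied at the same time as distillation.
    ft = feat_loss(f_student, f_teacher)
    return (1 - alpha) * ce + alpha * kd + beta * ft
```

In this sketch the feature term `ft` and the distillation term `kd` are minimized jointly in one backward pass, mirroring the simultaneous application of transfer and distillation described above.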