Host: The Japanese Society for Artificial Intelligence
Name : The 35th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 35
Location : [in Japanese]
Date : June 08, 2021 - June 11, 2021
Recently, deep learning has made significant progress in machine learning and achieved excellent performance in various fields. Deep neural networks (DNNs), the foundation of deep learning, acquire feature representations in their hidden layers by learning from input data. However, the explainability and interpretability of deep learning models remain limited, making it difficult to understand which parts of the network structure are responsible for their performance. To address this problem, we propose a method for constructing DNNs using the kernels of pretrained support vector machines (SVMs). By setting up a hidden layer that behaves like the SVM kernels, we aim to improve the interpretability of the networks. The proposed method also has the advantage that the network weights, which are conventionally initialized randomly, can be set efficiently.
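The correspondence the abstract relies on can be illustrated with a minimal sketch (not the authors' implementation): an SVM decision function f(x) = Σᵢ αᵢyᵢ K(x, svᵢ) + b is exactly a one-hidden-layer network whose hidden units are RBF kernel evaluations centered at the support vectors and whose output weights are the SVM's dual coefficients. The dataset and hyperparameters below are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative data and a pretrained RBF-kernel SVM.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
gamma = 0.5
svm = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def hidden_layer(x):
    # One RBF unit per support vector: exp(-gamma * ||x - sv||^2).
    d2 = ((x[:, None, :] - svm.support_vectors_[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def forward(x):
    # Output layer: weights are the SVM dual coefficients (alpha_i * y_i),
    # bias is the SVM intercept, so the network reproduces the SVM exactly.
    return hidden_layer(x) @ svm.dual_coef_.ravel() + svm.intercept_[0]

scores = forward(X)
assert np.allclose(scores, svm.decision_function(X))
```

Such a kernel-derived layer gives each hidden unit a direct interpretation (similarity to one support vector) and provides a non-random weight initialization, which is the advantage the abstract claims.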