In this paper, we propose a classification method that deploys hard-to-explain rules and is robust against adversarial example (AE) attacks on the MNIST dataset. The purpose of this paper is to address two technical difficulties of deep learning-based technology: the unexplainability of classification results and the vulnerability to AE attacks. The proposed method is similar to an existing method, the deep k-nearest neighbors (DkNN), in that it uses output vectors computed from an artificial neural network (ANN), and can thus resolve the unexplainability difficulty. It differs from the DkNN in the architecture of the ANNs used and in the format of the output vectors: those output vectors are discrete and serve as hard-to-explain rules that mitigate the vulnerability to AE attacks. In computational experiments, the MNIST dataset is taken as the target problem, and the fast gradient sign method (FGSM) and the basic iterative method (BIM) are used as the AE attacks. The computational results show that the proposed method achieved accuracies of over 95% against all attacks.
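For context, FGSM and BIM are standard gradient-based attacks; the following are their usual formulations from the literature, not the specific settings of this paper's experiments. FGSM perturbs an input $x$ with a single signed-gradient step of size $\epsilon$, and BIM applies smaller steps of size $\alpha$ iteratively, clipping the result to the $\epsilon$-ball around $x$:

\[ x^{\mathrm{adv}} = x + \epsilon \,\mathrm{sign}\big(\nabla_x J(\theta, x, y)\big), \]
\[ x^{(0)} = x, \qquad x^{(t+1)} = \mathrm{clip}_{x,\epsilon}\big(x^{(t)} + \alpha \,\mathrm{sign}(\nabla_x J(\theta, x^{(t)}, y))\big), \]

where $J$ is the training loss, $\theta$ the model parameters, $y$ the true label, $\epsilon$ the perturbation budget, and $\alpha$ the per-iteration step size.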