2020, Vol. 2020, No. AGI-015, pp. 07-
Visualizing deep neural networks (DNNs) provides an intuitive explanation of their learned internal representations, but evaluating such visualizations is difficult. We believe that a DNN's learned representation should be evaluated by its consistency with concepts held by humans. In this study, we represent such concepts as symbolic binary representations and as distributions with variance, and investigate the consistency of a specific neuroscientific concept (P300) with the representations learned from EEG data obtained in a P300 speller experiment. As a result, we found that the consistency between the concept and the representation is related to the discrimination accuracy of the DNN.