2020 Volume 19 Issue 2 Pages 92-98
Purpose: A general problem of machine-learning algorithms based on the convolutional neural network (CNN) technique is that the reason for a given output judgement is unclear. The purpose of this study was to introduce a strategy that may facilitate better understanding of how and why a specific judgement was made by the algorithm. The strategy is to preprocess the input image data in different ways so as to highlight which aspects of the images were most important for reaching the output judgement.
Materials and Methods: T2-weighted brain image series falling into two age ranges were used. Classifying each series into one of the two age ranges was the task given to the CNN model. The images from each series were preprocessed in five different ways to generate five different image sets: (1) subimages from the inner area of the brain, (2) subimages from the periphery of the brain, and (3–5) subimages of the brain parenchyma, gray matter area, and white matter area, respectively, extracted from the subimages of (2). The CNN model was trained and tested five times, each time using one of these image sets. The network architecture and all parameters for training and testing remained unchanged.
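The five preprocessing variants described above can be sketched as simple mask operations on each slice. This is a minimal illustration only: the function name, the radial split between "inner" and "periphery", and the way the gray/white matter masks are supplied are all assumptions, since the abstract does not specify the exact masking procedure.

```python
import numpy as np

def make_image_sets(image, brain_mask, gm_mask, wm_mask, inner_frac=0.5):
    """Derive the five illustrative image variants from one slice.

    `image` is a 2D array; `brain_mask`, `gm_mask`, and `wm_mask` are boolean
    arrays of the same shape. `inner_frac` (the normalized radius separating
    the inner area from the periphery) is a hypothetical parameter chosen for
    this sketch, not a value taken from the paper.
    """
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Normalized distance from the image center, ~1.0 at the edges.
    r = np.sqrt(((yy - cy) / (h / 2)) ** 2 + ((xx - cx) / (w / 2)) ** 2)
    inner = (r <= inner_frac) & brain_mask      # (1) inner area of the brain
    periphery = (r > inner_frac) & brain_mask   # (2) periphery of the brain
    return {
        "inner": np.where(inner, image, 0),
        "periphery": np.where(periphery, image, 0),
        # (3-5) parenchyma / gray matter / white matter, restricted to (2)
        "periphery_parenchyma": np.where(periphery & (gm_mask | wm_mask), image, 0),
        "periphery_gm": np.where(periphery & gm_mask, image, 0),
        "periphery_wm": np.where(periphery & wm_mask, image, 0),
    }
```

Each variant zeroes out everything except the region of interest, so the same CNN architecture can be trained unchanged on any of the five sets.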
Results: Judgement accuracy differed depending on which image set was used for training, and some of the differences were statistically significant. In particular, judgement accuracy decreased significantly when either the extra-parenchymal or the gray matter area was removed from the periphery of the brain (P < 0.05).
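A comparison of judgement accuracies between two image sets could, for instance, be made with a chi-squared test on the counts of correct and incorrect classifications. The abstract does not state which statistical test the authors used, and the counts below are invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical correct/incorrect counts for two image sets (not the paper's data):
# rows = image set, columns = [correct, incorrect]
table = [[90, 10],   # e.g. periphery subimages
         [75, 25]]   # e.g. periphery with gray matter removed

chi2, p, dof, expected = chi2_contingency(table)
significant = p < 0.05  # the significance threshold reported in the abstract
```

With larger gaps between the two accuracy rates (at fixed sample size), the resulting P value shrinks, which is the sense in which a difference becomes "statistically significant".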
Conclusion: The proposed strategy may help visualize which features of the images were important for the algorithm to reach a correct judgement, helping humans understand how and why a particular judgement was made by a CNN.