In recent years, genetic algorithms and machine learning techniques have been developed for image classification. While many of these techniques achieve strong performance on various tasks, their models are black boxes, and interpreting them takes considerable effort. For some applications, however, it is important to make clear why and how a classifier works. Although there is a clear need for classifier understandability, research on this topic remains inadequate. We previously proposed a method for generating simple natural language descriptions from decision trees and decision networks using if-then rules. However, some features are hard to understand, and analyzing the resulting classifications tends to be difficult. In this paper, we introduce a visualization technique that displays feature distributions to provide insight into image classifications. It allows us to gain a better understanding of classifiers and to interpret them intuitively. We trained image classifiers on several benchmarks and generated visualizations. We found the resulting visualizations intuitive and our method efficient.