1999, Vol. 14, No. 2, pp. 296-306
An improved way to select a feature subset for decision tree induction is presented. The method is based on a heuristic defined by means of explanation paths of the decision tree. Although the method employs a control structure similar to that of the wrapper model, it is much more efficient, because the heuristic that controls the search requires far fewer calls to the induction algorithm than the wrapper model does. Experiments show that the algorithm is as effective as the wrapper model, not only in simplifying decision trees but also in improving generalization ability. L-usefulness, a newly defined property of a feature that provides the theoretical background of the method, is also discussed.
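For context, the wrapper model the abstract compares against can be sketched as a greedy forward search in which every candidate feature subset is evaluated by actually running the induction algorithm. The Python sketch below is purely illustrative: it uses a one-level decision stump as a stand-in inducer and training accuracy as the evaluation criterion (both are assumptions for the sketch, not the paper's method), and it makes visible why the wrapper model needs many inducer calls, which is the cost the paper's heuristic reduces.

```python
def stump_accuracy(X, y, features):
    """Induce a one-level decision stump restricted to `features`
    and return its training accuracy (illustrative evaluation)."""
    if not features:
        # With no features, fall back to the majority-class baseline.
        majority = max(set(y), key=y.count)
        return sum(1 for v in y if v == majority) / len(y)
    best = 0.0
    for f in features:
        for thresh in {row[f] for row in X}:
            # Try both label assignments for the two branches.
            for left_label, right_label in ((0, 1), (1, 0)):
                pred = [left_label if row[f] <= thresh else right_label
                        for row in X]
                acc = sum(p == t for p, t in zip(pred, y)) / len(y)
                best = max(best, acc)
    return best

def wrapper_select(X, y):
    """Greedy forward wrapper search: in each round, re-run the inducer
    once per remaining candidate feature and keep the feature whose
    inclusion most improves accuracy; stop when no candidate helps.
    This costs O(n_features^2) inducer calls in the worst case."""
    n_features = len(X[0])
    selected = []
    best_acc = stump_accuracy(X, y, selected)
    improved = True
    while improved:
        improved = False
        best_f = None
        for f in range(n_features):
            if f in selected:
                continue
            acc = stump_accuracy(X, y, selected + [f])
            if acc > best_acc:
                best_acc, best_f, improved = acc, f, True
        if improved:
            selected.append(best_f)
    return selected, best_acc

# Toy data: feature 0 perfectly predicts the label, features 1-2 are noise.
X = [[0, 1, 5], [1, 0, 3], [0, 0, 2], [1, 1, 7]]
y = [0, 1, 0, 1]
selected, acc = wrapper_select(X, y)
# Expected: the search keeps only feature 0 and reaches accuracy 1.0.
```

Counting the calls to `stump_accuracy` in a run like this makes the abstract's efficiency claim concrete: the wrapper model re-invokes the inducer for every candidate subset, whereas the paper's heuristic is said to require far fewer such calls.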