Abstract
An attempt was made to select a statistical model for a neural network used for the selection of trees for thinning. AIC, MDLP, and FPE were adopted as criteria for selecting models with higher generalization ability, and a three-layer back-propagation network was used as the learning algorithm. This was applied to a two-output pattern, whose outputs were "thinned" and "unthinned", as well as to various numbers of input and hidden units, various amounts of learning data, and several learning conditions. It was found that: (1) many of the models gave good results when the number of hidden units was one or two; (2) when the number of hidden units was one, the number of error patterns recognized in learning was zero, and from the viewpoint of generalization the results were the same as those when the number of hidden units was two; (3) irrespective of whether the number of output units was one or two, good generalization was obtained even when the learning data accounted for only about 20% of all the data.
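The selection procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic tree data, network sizes, and learning rate are assumptions, and only AIC (of the three criteria mentioned) is computed here, as AIC = -2 log-likelihood + 2 × (number of free weights).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, n_hidden, epochs=2000, lr=0.5):
    """Train a three-layer (input-hidden-output) network by plain
    back-propagation on binary targets ("thinned" = 1, "unthinned" = 0).
    Returns predicted probabilities and the number of free weights."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)          # hidden-layer activations
        p = sigmoid(h @ W2 + b2).ravel()  # output: P(thinned)
        d_out = (p - y)[:, None]          # cross-entropy error at output
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(y)
        b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_hid / len(y)
        b1 -= lr * d_hid.mean(0)
    k = W1.size + b1.size + W2.size + b2.size  # free parameters
    return p, k

def aic(p, y, k):
    """AIC = -2 * log-likelihood + 2 * (number of weights)."""
    eps = 1e-9
    ll = np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return -2.0 * ll + 2 * k

# Illustrative data: two hypothetical tree measurements, label 1 = "thinned".
X = rng.normal(size=(80, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Compare candidate hidden-layer sizes by AIC (smaller is better).
for n_hidden in (1, 2, 4, 8):
    p, k = train_mlp(X, y, n_hidden)
    print(n_hidden, round(aic(p, y, k), 1))
```

A lower AIC penalizes the extra weights of larger hidden layers, which mirrors the abstract's finding that one or two hidden units were often sufficient.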