1992, Vol. 7, No. 2, pp. 329-335
Learning in large-scale neural networks suffers from a heavy computational burden and the local minima problem. One way of avoiding these difficulties is the adoption of module-structured networks. This paper proposes learning of module-structured networks using the structural learning algorithm with forgetting of link weights, previously proposed by the author. The essence of the earlier proposal is that forgetting of link weights makes unnecessary links fade out, thus generating a skeletal network structure. Structural learning enables the formation of modules; it also enables automatic selection of the most appropriate module among those previously learned for the learning of the present module. This not only achieves efficient learning but also makes the resulting network understandable owing to its modular character. In learning a Boolean function, a module corresponding to one of its subproblems is automatically selected from among the previously learned modules; when more than one module corresponds to the subproblem, the best-learned one is selected. In the parity problem, structural learning can automatically reuse previously learned lower-order parity modules. The transformation of figures can be represented as a sequence of basic transformations: supposing each module has already learned one of the basic transformations, structural learning can discover a sequence of modules that realizes a given transformation. These examples demonstrate the effectiveness of structural learning with forgetting in the learning of module-structured networks.
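The forgetting mechanism described above can be read as adding a constant decay term to ordinary backpropagation, so that link weights the task does not need drift toward zero while useful links survive. Below is a minimal sketch of that idea on XOR (the 2-input parity subproblem mentioned in the abstract). The network size, learning rate, and forgetting rate are illustrative assumptions, not the paper's values, and the module-formation and module-selection machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, i.e. 2-input parity.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Deliberately oversized hidden layer: forgetting should let spare links fade.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)        # hidden activations
    return H, sigmoid(H @ W2 + b2)  # network output

eta, eps = 0.5, 1e-3  # learning rate and forgetting rate (illustrative values)

_, Y0 = forward(X)
err0 = np.mean((Y0 - T) ** 2)       # error before training

for _ in range(20000):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)      # output-layer delta (squared error)
    dH = (dY @ W2.T) * H * (1 - H)  # hidden-layer delta
    # Ordinary gradient step plus the constant forgetting term eps*sign(w),
    # which steadily shrinks every weight so that unused links decay to zero.
    W2 -= eta * (H.T @ dY) + eps * np.sign(W2); b2 -= eta * dY.sum(0)
    W1 -= eta * (X.T @ dH) + eps * np.sign(W1); b1 -= eta * dH.sum(0)

_, Y = forward(X)
err = np.mean((Y - T) ** 2)         # error after training with forgetting
print(err0, err)
```

After training, the weight matrices typically contain entries pulled close to zero by the forgetting term; pruning those links yields the skeletal structure from which modules can be read off.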