The Brain & Neural Networks
Online ISSN : 1883-0455
Print ISSN : 1340-766X
ISSN-L : 1340-766X
Volume 1, Issue 2
Displaying 1-7 of 7 articles from this issue
  • Masumi Ishikawa, Hirotsugu Yamamoto
    1994 Volume 1 Issue 2 Pages 57-63
    Published: December 05, 1994
    Released on J-STAGE: December 01, 2010
    JOURNAL FREE ACCESS
    The discovery of explicit rules by backpropagation learning of neural networks is extremely hard due to the difficulty of interpreting the hidden units of the resulting networks. This paper proposes that structural learning with forgetting can discover rules in the form of Boolean functions. A mushroom database is used to demonstrate that Boolean functions classifying mushrooms as edible or poisonous can be discovered. A comparative study of the generalization abilities of backpropagation learning, structural learning with forgetting, and ID3 from artificial intelligence is also presented.
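    To make the abstract's idea concrete, here is a minimal sketch of the "forgetting" mechanism it refers to: ordinary backpropagation plus a constant decay term (the gradient of an L1 penalty on the weights) that drives unneeded weights to exactly zero, leaving a skeleton network whose surviving connections can be read off as a Boolean rule. The layer sizes and constants below are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of structural learning with forgetting, assuming a
# tiny one-hidden-layer network and squared-error loss. All sizes and
# constants are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 8, 4            # assumed layer sizes
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, 1))
eta, eps = 0.1, 1e-3          # learning rate and forgetting strength (assumed)

def train_step(X, y):
    """One batch step: backprop on squared error plus the forgetting decay."""
    global W1, W2
    H = sigmoid(X @ W1)                     # hidden activations
    out = sigmoid(H @ W2)                   # network output in (0, 1)
    d_out = (out - y[:, None]) * out * (1.0 - out)
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    # The extra eps * sign(w) term is the "forgetting": it decays every
    # weight by a constant amount per step, so weights the task does not
    # need settle at zero and can be pruned away afterwards.
    W2 -= eta * (H.T @ d_out) / len(X) + eps * np.sign(W2)
    W1 -= eta * (X.T @ d_hid) / len(X) + eps * np.sign(W1)
```

    After training, weights below a small threshold can be clipped to zero and the remaining skeleton inspected as a rule, which is what makes the learned classifier interpretable.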
  • Norihito Majima, Akiko Watanabe, Akira Yoshimura, Takashi Nagano
    1994 Volume 1 Issue 2 Pages 64-68
    Published: December 05, 1994
    Released on J-STAGE: December 01, 2010
    JOURNAL FREE ACCESS
    In this paper we propose a new criterion, called the “effectiveness factor,” for pruning hidden units. The performance of this criterion was tested on several examples. In terms of the final number of hidden units obtained, the effectiveness factor was equivalent or superior to other conventional criteria. Moreover, the computation time with this criterion is much smaller than with the others. We therefore conclude that the effectiveness factor is superior to conventional criteria.
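    The abstract does not give the formula for the effectiveness factor, so the sketch below only shows the generic prune-and-retrain loop that such a criterion plugs into. The score used here (output-weighted activation variance) is a hypothetical stand-in, not the paper's definition.

```python
# Generic hidden-unit pruning loop: score each unit, drop the least
# effective one, then retrain. The scoring function is a hypothetical
# proxy, NOT the effectiveness factor defined in the paper.
import numpy as np

def unit_scores(H, W2):
    """H: (n_samples, n_hid) hidden activations; W2: (n_hid, n_out) weights."""
    # Rough proxy: how much each unit's varying activation can move the output.
    return H.var(axis=0) * np.abs(W2).sum(axis=1)

def prune_least_effective(W1, W2, H):
    """Remove the lowest-scoring hidden unit from a one-hidden-layer net."""
    k = int(np.argmin(unit_scores(H, W2)))
    # Delete unit k's incoming (column of W1) and outgoing (row of W2) weights.
    return np.delete(W1, k, axis=1), np.delete(W2, k, axis=0)
```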
  • Masahiko Morita
    1994 Volume 1 Issue 2 Pages 69-74
    Published: December 05, 1994
    Released on J-STAGE: December 01, 2010
    JOURNAL FREE ACCESS
    A learning method for nonmonotone analog neural networks is presented by which almost any sequential pattern can be memorized. The method requires neither a complex learning rule nor special devices for synchronizing neurons or for delay; one only has to change the input pattern gradually and modify the synaptic weights according to a kind of correlation learning rule. The state of the network then follows slightly behind the input pattern, and after a few repetitions its trajectory grows into a dynamic attractor. Numerical simulations are performed to examine the learning process and the recall ability of the model.
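    As a rough illustration of the training procedure described above, the sketch below morphs the input gradually from one stored pattern to the next while the analog state relaxes, applying a simple correlation (Hebbian-type) update. The nonmonotone output function and all constants are guesses for illustration, not Morita's exact model.

```python
# Sketch of sequential-pattern learning in an analog network with a
# nonmonotone output function. Patterns, constants, f(), and the exact
# update rule are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
N = 100                                      # number of neurons (assumed)
patterns = np.sign(rng.normal(size=(5, N)))  # a random +/-1 pattern sequence
W = np.zeros((N, N))
tau, dt, eta = 10.0, 1.0, 0.05               # assumed time/learning constants

def f(u):
    # An example nonmonotone output function: rises, then falls for large |u|.
    return np.tanh(u) * (1.0 - np.tanh(0.5 * u) ** 2)

for epoch in range(20):                      # a few repetitions of the sequence
    u = patterns[0].astype(float)
    for a, b in zip(patterns, np.roll(patterns, -1, axis=0)):
        for step in range(20):               # morph the input gradually a -> b
            x = a + (b - a) * step / 19.0
            y = f(u)
            u += dt * (-u + W @ y + x) / tau  # state relaxes, trailing the input
            W += (eta / N) * np.outer(x, y)   # correlation-type weight update
```

    Because the state trails the slowly moving input, the correlation update links each network output to the pattern just ahead of it, which is one way the trajectory can grow into a dynamic attractor over repetitions.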