Abstract
We investigate on-line learning of two-layered feed-forward neural networks with randomly diluted connections using a gradient descent algorithm. Assuming self-averaging, we derive coupled first-order differential equations for the order parameters that describe the learning process in the thermodynamic limit, within the framework of statistical physics. We show that the learning time for asymmetric dilution of the connections in the teacher network is shorter than that for symmetric dilution. However, learning does not converge when the teacher network is diluted too much. We obtain a phase diagram in the plane of learning rate versus dilution rate for the diluted teacher network. When the student network is diluted too much, learning converges only imperfectly, because the average value of the student network's connections is essentially smaller than that of the teacher network's connections.
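The setting described above can be illustrated with a minimal sketch, not the paper's exact model: a two-layer "soft committee machine" student learns on-line, by gradient descent on a quadratic error, from a teacher whose input-to-hidden connections are randomly diluted. The network sizes, the kept-connection fraction `c`, the learning rate `eta`, and the tanh activation are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3   # input dimension, number of hidden units (illustrative)
eta = 0.1       # learning rate (illustrative)
c = 0.5         # fraction of teacher connections kept after random dilution

# Teacher weights with a random dilution mask applied element-wise.
B = rng.standard_normal((K, N)) * rng.binomial(1, c, size=(K, N))
# Student weights, initialized small.
J = rng.standard_normal((K, N)) * 0.01

g = np.tanh                              # hidden-unit activation
dg = lambda x: 1.0 - np.tanh(x) ** 2     # its derivative

def output(W, x):
    # Two-layer network with fixed unit hidden-to-output weights.
    return np.sum(g(W @ x / np.sqrt(N)))

errors = []
for step in range(20000):
    x = rng.standard_normal(N)           # fresh random example each step
    t = output(B, x)                     # teacher label
    h = J @ x / np.sqrt(N)               # student hidden fields
    s = np.sum(g(h))                     # student output
    # On-line gradient descent step on the error (s - t)^2 / 2.
    J -= eta * (s - t) * np.outer(dg(h), x) / np.sqrt(N)
    errors.append(0.5 * (s - t) ** 2)

print(np.mean(errors[:1000]), np.mean(errors[-1000:]))
```

In the thermodynamic limit N → ∞, the analysis in the paper replaces such a simulation by deterministic flow equations for order parameters (overlaps among student and teacher weight vectors); the sketch only shows the microscopic dynamics those equations summarize.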