The Brain & Neural Networks
Online ISSN : 1883-0455
Print ISSN : 1340-766X
ISSN-L : 1340-766X
Volume 13, Issue 4
Displaying 1-9 of 9 articles from this issue
  • Atsushi Shimada, Naoyuki Tsuruta, Rin-ichiro Taniguchi
    2006, Volume 13, Issue 4, Pages 129-136
    Published: December 05, 2006
    Released on J-STAGE: March 28, 2011
    JOURNAL FREE ACCESS
    The Hyper-Column Model (HCM) is a self-organizing, competitive, hierarchical multilayer neural network. It is derived from the Neocognitron by replacing each S cell and C cell with a two-layer Hierarchical Self-Organizing Map (HSOM). HCM can recognize images despite variations in object size, position, orientation, and spatial resolution. In this paper, we propose two new learning methods: “Combinatorial Learning” and “Associative Learning”. The former enables HCM to learn, via excitatory lateral connections, the pattern of winner neurons activated in each HSOM; the latter extends HCM to a supervised learnable model.
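    The abstract's HSOM layers are built from self-organizing maps trained by competitive learning. As a hedged illustration (not the authors' HCM; all function names and hyperparameters here are assumptions), a minimal 2-D SOM sketch looks like this, with a helper returning the winner index — the quantity that "Combinatorial Learning" would record across HSOMs:

    ```python
    import numpy as np

    def train_som(data, grid_shape=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
        """Train a tiny 2-D self-organizing map by competitive learning.

        A sketch of the generic SOM building block underlying an HSOM layer;
        the HCM itself stacks such maps hierarchically, which is not shown.
        """
        rng = np.random.default_rng(seed)
        h, w = grid_shape
        weights = rng.random((h * w, data.shape[1]))
        # Grid coordinates of each neuron, used by the neighborhood function.
        coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
        for t in range(epochs):
            lr = lr0 * (1.0 - t / epochs)                # decaying learning rate
            sigma = sigma0 * (1.0 - t / epochs) + 1e-3   # shrinking neighborhood
            for x in data:
                # Competition: the winner is the neuron closest to the input.
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                # Cooperation: Gaussian neighborhood around the winner on the grid.
                d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
                nb = np.exp(-d2 / (2.0 * sigma ** 2))
                # Adaptation: pull the winner and its neighbors toward the input.
                weights += lr * nb[:, None] * (x - weights)
        return weights

    def winner_pattern(weights, x):
        """Index of the winning neuron for input x."""
        return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    ```

    Trained on data from two well-separated clusters, the map assigns different winner neurons to inputs near the two cluster centers, which is the self-organized competition the model relies on.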
  • Kazuyuki Hiraoka, Taketoshi Mishima
    2006, Volume 13, Issue 4, Pages 137-145
    Published: December 05, 2006
    Released on J-STAGE: March 28, 2011
    JOURNAL FREE ACCESS
    Unlike ordinary reinforcement learning (RL) for a single task, RL for a family of tasks is desired in time-varying environments, multi-criteria problems, and inverse RL. In the present paper, a family of tasks is defined by a weighted sum of partial rewards, and a parallel learning method is proposed for this family. The expected reward of the optimal policy is not linear in this case; it is a piecewise-linear convex function of the weight values. Computing convex hulls and Minkowski sums realizes parallel Q-learning for all possible weight values at once, despite their infinitely many variations.
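    The key representation in this abstract is a piecewise-linear convex function of the weight vector, stored as a finite set of partial-reward vectors: the value at weight w is the maximum dot product over the set. As a hedged sketch (not the paper's algorithm; the function names and the sampling-based pruning are assumptions), the two set operations the abstract mentions can be illustrated as follows:

    ```python
    import numpy as np
    from itertools import product

    def value(points, w):
        """Piecewise-linear convex value: V(w) = max over candidate
        reward vectors v (one per policy) of the dot product w.v."""
        return max(float(np.dot(w, p)) for p in points)

    def minkowski_sum(A, B):
        """Minkowski sum of two candidate sets: all pairwise vector sums.
        Summing the sets adds the two convex value functions pointwise."""
        return [np.asarray(a) + np.asarray(b) for a, b in product(A, B)]

    def prune(points, weight_samples):
        """Approximate upper-convex-hull pruning: keep only vectors that
        achieve the maximum for at least one sampled weight.  Vectors
        dominated for every weight never affect V and can be dropped.
        (An exact method would compute the convex hull instead.)"""
        keep = set()
        for w in weight_samples:
            vals = [float(np.dot(w, p)) for p in points]
            keep.add(int(np.argmax(vals)))
        return [points[i] for i in sorted(keep)]
    ```

    For example, with candidates (1, 0), (0, 1), and (0.4, 0.4) over weights on the simplex, the third vector is dominated everywhere and pruning removes it, while the value at w = (0.5, 0.5) remains 0.5.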