The Hyper-Column Model (HCM) is a self-organized, competitive, hierarchical multilayer neural network. It is derived from the Neocognitron by replacing each S-cell and C-cell with a two-layer Hierarchical Self-Organizing Map (HSOM). The HCM can recognize images under variations in object size, position, orientation, and spatial resolution. In this paper, we propose two new learning methods: “Combinatorial Learning” and “Associative Learning”. The former enables the HCM to learn, through excitatory lateral connections, the pattern of winner neurons activated in each HSOM. The latter extends the HCM into a model capable of supervised learning.
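The competitive, self-organizing building block the abstract refers to can be illustrated with a minimal sketch of winner selection and a neighborhood update in a plain self-organizing map (SOM). All names, the 1-D neighborhood, and the parameter values here are illustrative assumptions, not the paper's HSOM implementation:

```python
import numpy as np

def winner(weights, x):
    """Index of the neuron whose weight vector is closest to input x
    (the 'winner neuron' in competitive learning)."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def som_update(weights, x, lr=0.1, sigma=1.0):
    """Move every neuron toward x, scaled by a Gaussian neighborhood
    centered on the winner (assumed 1-D topology for simplicity)."""
    w = winner(weights, x)
    idx = np.arange(len(weights))
    h = np.exp(-((idx - w) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
weights = rng.random((5, 2))   # 5 neurons, 2-D inputs (toy sizes)
x = np.array([0.5, 0.5])
new_weights = som_update(weights, x)
```

In the HCM, stacks of such maps form each HSOM, and the proposed Combinatorial Learning additionally learns the joint pattern of winners across maps via lateral connections.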
Unlike ordinary reinforcement learning (RL) for a single task, RL for a family of tasks is needed in time-varying environments, multi-criteria problems, and inverse RL. In the present paper, a family of tasks is defined by a weighted sum of partial rewards, and a parallel learning method is proposed for this family. The expected reward of the optimal policy is not linear in this case; it is a piecewise-linear convex function of the weight values. Computing convex hulls and Minkowski sums realizes parallel Q-learning for all possible weight values at once, despite their infinitely many variations.
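The piecewise-linear convexity can be sketched under simplified assumptions: if each candidate policy yields a vector of expected partial rewards v, its return for weight vector w is the linear function w·v, so the optimal value is the upper envelope max_i w·v_i, which is piecewise-linear and convex in w. Keeping the set of return vectors (and pruning dominated ones, as an exact convex-hull step would) answers queries for every weight at once. This toy code is our own illustration, not the paper's algorithm, and it omits the Minkowski-sum step used to combine such sets during backups:

```python
import numpy as np

def scalarized_value(vectors, w):
    """Optimal value at weight w: upper envelope of the linear
    functions w -> w.v over the stored return vectors."""
    return max(float(np.dot(w, v)) for v in vectors)

def prune(vectors, samples=1000, seed=0):
    """Keep only vectors optimal for some sampled weight; a sampling
    stand-in for exact convex-hull pruning of dominated vectors."""
    rng = np.random.default_rng(seed)
    keep = set()
    for _ in range(samples):
        w = rng.random(len(vectors[0]))
        w /= w.sum()
        keep.add(int(np.argmax([np.dot(w, v) for v in vectors])))
    return [vectors[i] for i in sorted(keep)]

# Toy return vectors for two partial rewards; the last is dominated.
vectors = [np.array([3.0, 0.0]), np.array([0.0, 3.0]),
           np.array([2.0, 2.0]), np.array([1.0, 1.0])]
pruned = prune(vectors)
```

For equal weights w = (0.5, 0.5), the envelope is attained by the balanced vector (2, 2), while extreme weights pick out (3, 0) or (0, 3); the dominated vector (1, 1) is never optimal and is pruned away.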