1999, Vol. 35, No. 4, pp. 560-567
The progressive evolution method is a promising way to accelerate learning by reducing the effective problem size in genetic learning. This method leads the population to the final goal by giving step-by-step learning targets, called subgoals. Giving subgoals divides a large search space into a series of smaller ones, thereby accelerating the evolution. Previous implementations of progressive learning are not practical, however, because subgoals must be hand-crafted for each specific problem. This paper proposes a progressive evolution method that allows the population to acquire subgoals autonomously. The primary feature of our proposal is to concurrently perform a global search for the final goal and a local search for the current subgoal. The global search moves the subgoal toward the final goal, while the local search leads the population to the current subgoal and also moves the subgoal toward the final goal more quickly. To accelerate the subgoal's movement toward the final goal, we use the following fitness function. The function defines a neighborhood of the current subgoal and assigns a higher fitness value to individuals within the neighborhood the farther they lie from the subgoal. This allows individuals that are closer to the final goal than the subgoal is to survive. Moreover, the neighborhood size is determined by the degree of achievement of the search: we shrink the neighborhood as the subgoal approaches the final goal, which improves search efficiency. An experiment that generates the control circuit of an artificial ant confirms that the population evolves more quickly with our method than with conventional methods.
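The subgoal-neighborhood fitness described above can be illustrated with a minimal sketch. This is not the paper's actual formulation; it assumes scalar distances to the current subgoal and from the subgoal to the final goal, a hypothetical `base_radius` parameter, and a simple reciprocal-distance score outside the neighborhood:

```python
def progressive_fitness(dist_to_subgoal, subgoal_to_goal, base_radius=1.0):
    """Illustrative fitness: reward individuals near the current subgoal,
    and reward probing *beyond* it inside a shrinking neighborhood.

    dist_to_subgoal -- distance from the individual to the current subgoal
    subgoal_to_goal -- distance from the current subgoal to the final goal
    base_radius     -- hypothetical scale factor for the neighborhood
    """
    # Neighborhood radius shrinks as the subgoal approaches the final goal
    # (the "degree of achievement" idea in the abstract).
    radius = base_radius * subgoal_to_goal

    if dist_to_subgoal <= radius:
        # Inside the neighborhood: the FARTHER an individual is from the
        # subgoal, the higher its fitness, so individuals that overshoot
        # toward the final goal survive.
        return 1.0 + dist_to_subgoal / max(radius, 1e-9)

    # Outside the neighborhood: plain local search toward the subgoal;
    # closer is better.
    return 1.0 / (1.0 + dist_to_subgoal)
```

Under this sketch, any individual inside the neighborhood outranks every individual outside it, and the shrinking radius means the same individual can fall out of the bonus region as the subgoal closes in on the final goal.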