Abstract
This paper presents a learning architecture with an evolutionary learning algorithm for the collective behaviors of a group of mobile robots. The base learning algorithm is a distributable genetic algorithm with redundant coding (RGA). Applying a standard GA to distributed robot systems faces three drawbacks: (1) GAs do not cope with non-stationary environments, (2) GAs are centralized algorithms, and (3) GAs require synchronization for generation replacement. The proposed architecture enables the GA to search a wide range of solutions by augmenting the diversity of genetic information, so that optimum solutions that vary over time can be tracked. Analytic results explaining why RGA outperforms the standard GA in non-stationary environments are presented. RGA is then converted into a distributed and asynchronous evolutionary algorithm (DRGA). Simulation results on a task of learning collective strategies to capture fleeing targets verify the validity of the proposed algorithm, and the learned strategies are shown to be reasonable.
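To make the notion of redundant coding concrete, the following is a minimal sketch of one common realization (not necessarily the paper's exact encoding): each locus of the genotype carries several candidate genes plus an index marking the active one, so unexpressed genes persist as a reservoir of diversity that mutation can reactivate when the environment shifts. All names here (`decode`, `mutate`, `random_genotype`) are illustrative assumptions.

```python
import random

def random_genotype(n_loci, redundancy):
    """Build a redundantly coded genotype: at each locus, a list of
    candidate gene values and the index of the currently active one."""
    return [([random.random() for _ in range(redundancy)],
             random.randrange(redundancy))
            for _ in range(n_loci)]

def decode(genotype):
    """Express the phenotype: only the active gene at each locus is
    used; the inactive candidates are carried along silently."""
    return [candidates[active] for candidates, active in genotype]

def mutate(genotype, rate=0.1):
    """Mutation may switch which candidate is active at a locus,
    letting dormant genetic material re-enter the phenotype."""
    mutated = []
    for candidates, active in genotype:
        if random.random() < rate:
            active = random.randrange(len(candidates))
        mutated.append((candidates, active))
    return mutated
```

Because selection acts only on the expressed genes, the dormant candidates drift largely neutrally, which is one way a GA can retain enough diversity to follow a time-varying optimum.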