Abstract
Insects perform sophisticated vision-based behaviors and adapt to the real world with their simple brains. Understanding the mechanisms behind these behaviors is expected to help us develop autonomous robots. Obstacle detection and collision avoidance are fundamental functions for autonomous robots, and several models have been proposed based on the results of biological experiments on insect visual-motor systems. However, these experiments and models do not address how insects cope with multiple moving objects during locomotion. In this study, we developed a simple model based on recent findings on motion detection in crickets, in which the visual angle, visual angular velocity, and moving direction of objects are important cues for detecting moving objects even during locomotion. We implemented these visual cues in a model for obstacle detection and collision avoidance, and evaluated its ability to cope with real environments using a small mobile robot equipped with insect-mimetic vision.