Abstract
An autonomous agent typically has a ranged view based on an absolute coordinate system: it receives accurate information about everything within a given range and nothing outside it. This is a rather artificial situation. In this paper, we propose a staged view in distance and direction based on a relative coordinate system, in which an agent receives accurate information in its immediate neighborhood, but only rough, symbolized distance and direction, and only a rough distinction between other agents, in the short- and middle-distance areas. This reflects human vision: an object is easy to see when nearby but harder to see at a greater distance, and easy to see straight ahead but harder to see toward the left and right. Through a numerical experiment on the pursuit problem, a standard multi-agent benchmark, we show that an agent with the staged view learns effectively using Q-learning.
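As a rough illustration of the staged view described above, the following sketch maps another agent's relative position to either accurate coordinates (in the neighborhood), a coarse symbolic distance band and direction (short and middle distance), or no observation (out of view). The radii, the 8-way direction symbols, and the assumption that the agent's heading is the +x axis are all illustrative choices, not taken from the paper.

```python
import math

# Hypothetical thresholds (not from the paper): the accurate "neighborhood"
# radius and the boundaries of the short- and middle-distance areas.
NEAR_RADIUS = 2.0
SHORT_RADIUS = 5.0
MIDDLE_RADIUS = 10.0

def staged_observation(dx, dy):
    """Map another agent's relative position (dx, dy) to a staged view.

    Within NEAR_RADIUS the exact offset is returned; farther out only
    coarse symbolic distance and direction labels are returned; beyond
    MIDDLE_RADIUS the other agent is not seen at all.
    """
    dist = math.hypot(dx, dy)
    if dist <= NEAR_RADIUS:
        return ("near", dx, dy)      # accurate information
    if dist > MIDDLE_RADIUS:
        return None                  # out of the field of view
    band = "short" if dist <= SHORT_RADIUS else "middle"
    # Coarse 8-way direction symbol relative to the agent's heading
    # (the heading is assumed to point along +x for simplicity).
    angle = math.degrees(math.atan2(dy, dx))
    sector = int(((angle + 22.5) % 360) // 45)
    direction = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"][sector]
    return (band, direction)
```

Because distant observations collapse into a small set of symbols, the resulting state space for Q-learning stays compact even when other agents are far away.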