In reinforcement learning, learning purposive behaviors takes a long time because rewards are delayed. Multiple reward functions are often introduced to accelerate the acquisition of complicated behaviors. However, methods based on a weighted sum of reward functions often cause undesirable side effects, because the resulting objective function differs from the original one. In this paper, we propose a novel hierarchical reinforcement learning method that utilizes multiple reward functions. The value function of the upper layer is estimated from the reward for accomplishing the entire task together with a supplementary reward computed from the value functions of the lower layers. The proposed method was applied to a simplified arm-movement problem and outperformed conventional methods.
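The core idea of combining the task reward with a supplementary reward derived from lower-layer value functions can be sketched as follows. This is a minimal illustration assuming a tabular TD(0) setting; all names and constants (`ALPHA`, `GAMMA`, `BETA`, `supplementary_reward`) are hypothetical and not taken from the paper.

```python
import numpy as np

ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.95  # discount factor (assumed)
BETA = 0.5    # weight of the supplementary reward (assumed)

def supplementary_reward(lower_values, state, next_state):
    """Progress signal from the lower layers: the increase in their value estimates."""
    return sum(v[next_state] - v[state] for v in lower_values)

def update_upper_value(v_upper, lower_values, state, next_state, task_reward):
    """One TD(0) step on the upper-layer value function.

    The update target combines the reward for accomplishing the entire task
    with a supplementary reward computed from the lower layers' value functions.
    """
    r = task_reward + BETA * supplementary_reward(lower_values, state, next_state)
    td_error = r + GAMMA * v_upper[next_state] - v_upper[state]
    v_upper[state] += ALPHA * td_error
    return td_error

# Toy usage on a 3-state chain, with one lower layer held fixed for illustration
v_upper = np.zeros(3)
lower_values = [np.array([0.0, 0.5, 1.0])]
update_upper_value(v_upper, lower_values, state=0, next_state=1, task_reward=0.0)
```

In this sketch the supplementary reward is the improvement in the lower layers' value estimates, so the upper layer is guided by subtask progress even before the sparse task reward arrives, while the task reward keeps the overall objective anchored to the original problem.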