2024 Volume 15 Issue 2 Pages 421-431
Reservoir computing-based control is attracting attention in computational intelligence. Although reinforcement learning is effective for autonomously improving control performance, the computational cost of its extensive trial-and-error remains a major drawback. To address this issue, we propose a mental simulation model based on reservoir computing. The model learns an internal representation of the environment and uses it to optimize action sequences directly. We evaluate the proposed framework on classic control tasks and a specialized application scenario under three conditions: fully observable, partially observable, and visual observation. The results show that the proposed model outperforms prior methods in action planning.
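To make the idea of reservoir-based mental simulation concrete, the following is a minimal sketch, not the authors' actual model: an echo state network serves as a learned internal forward model, and a random-shooting planner scores candidate action sequences entirely inside that model. All names, sizes, the toy environment, and the planner choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's architecture): an echo state
# network acts as an internal forward model s_{t+1} ~ W_out @ x_t, where
# x_t is the reservoir state driven by the current observation and action.
N_RES, N_OBS, N_ACT = 100, 2, 1
W_res = rng.normal(size=(N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1
W_in = rng.normal(scale=0.5, size=(N_RES, N_OBS + N_ACT))

def step_reservoir(x, obs, act):
    """Tanh reservoir update driven by the observation-action pair."""
    u = np.concatenate([obs, act])
    return np.tanh(W_res @ x + W_in @ u)

def true_dynamics(obs, act):
    """Stand-in environment: a damped point mass pushed by the action."""
    pos, vel = obs
    vel = 0.9 * vel + 0.1 * act[0]
    return np.array([pos + vel, vel])

# Collect transitions under random actions and fit the linear readout by
# ridge regression -- the standard way reservoir readouts are trained.
X, Y = [], []
obs, x = np.zeros(N_OBS), np.zeros(N_RES)
for _ in range(500):
    act = rng.uniform(-1, 1, size=N_ACT)
    x = step_reservoir(x, obs, act)
    nxt = true_dynamics(obs, act)
    X.append(x)
    Y.append(nxt)
    obs = nxt
X, Y = np.array(X), np.array(Y)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N_RES), X.T @ Y).T

def simulate(x0, obs0, actions):
    """'Mental simulation': roll the learned model forward internally."""
    x, obs, cost = x0.copy(), obs0.copy(), 0.0
    for act in actions:
        x = step_reservoir(x, obs, act)
        obs = W_out @ x          # predicted next observation
        cost += obs[0] ** 2      # drive the predicted position toward 0
    return cost

# Random-shooting planner: score candidate action sequences purely in the
# internal model, then pick the best -- no extra environment trials needed.
H, K = 10, 64
candidates = rng.uniform(-1, 1, size=(K, H, N_ACT))
costs = [simulate(x, obs, c) for c in candidates]
best = candidates[int(np.argmin(costs))]
print("best first action:", best[0])
```

The key point this sketch illustrates is that once the readout is fitted, action-sequence optimization touches only the internal model, which is the trial-and-error saving the abstract refers to.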