International Journal of Automotive Engineering
Online ISSN : 2185-0992
Print ISSN : 2185-0984
ISSN-L : 2185-0992
Research paper
Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework
Patrick Hoffmann, Kirill Gorelik, Valentin Ivanov

2024 Volume 15 Issue 1 Pages 19-26

Abstract

This work presents a study of methods for the automated derivation of control strategies for over-actuated systems. For this purpose, Reinforcement Learning (RL) and Model Predictive Control (MPC), both approximating the solution of the Optimal Control Problem (OCP), are compared using the example of an over-actuated vehicle model executing an ISO Double Lane Change (DLC). This driving maneuver is chosen because its critical vehicle dynamics allow the algorithms to be compared in terms of control performance and their potential for automation within a design space exploration framework. Both algorithms achieve reasonable control results for the goal of this study, although they differ in driving stability. While MPC first requires the optimization of a trajectory, which is then tracked optimally, RL may combine both steps in one. In addition, the manual effort required to adapt the OCP to new design variants for solving it with RL and MPC is evaluated and assessed with respect to its automation. Based on the results of this study, an Actor-Critic Reinforcement Learning method is recommended for the automated derivation of control strategies in the context of a design space exploration.
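The abstract contrasts MPC, which separates trajectory optimization from tracking, with Actor-Critic RL, which learns a policy approximating the OCP solution directly. The following is a minimal, hypothetical sketch of the Actor-Critic idea on a toy 1-D tracking task standing in for lateral-error control; the dynamics, rewards, and hyperparameters are illustrative assumptions, not the authors' vehicle model or setup:

```python
import numpy as np

# Hypothetical toy illustration of Actor-Critic learning, NOT the paper's setup:
# state s = tracking error, action a = corrective input, reward = -s^2.
# Linear-Gaussian actor a ~ N(theta*s, sigma^2); quadratic critic V(s) = w*s^2.

rng = np.random.default_rng(0)
theta, w = 0.0, 0.0                 # actor gain and critic weight
sigma = 0.3                         # exploration noise std
alpha_actor, alpha_critic, gamma = 0.01, 0.02, 0.95

def plant(s, a):
    """Toy linear error dynamics: the error decays and the action steers it."""
    s_next = np.clip(0.9 * s + 0.5 * a, -2.0, 2.0)
    return s_next, -s_next ** 2     # reward penalizes the remaining error

for episode in range(300):
    s = rng.uniform(-1.0, 1.0)
    for _ in range(30):
        a = theta * s + sigma * rng.standard_normal()
        s_next, r = plant(s, a)
        # One-step TD error using the quadratic value feature s^2
        td = r + gamma * w * s_next ** 2 - w * s ** 2
        # Critic: semi-gradient TD(0) update
        w += alpha_critic * td * s ** 2
        # Actor: policy-gradient update, grad log pi = (a - theta*s) * s / sigma^2
        theta += alpha_actor * td * (a - theta * s) * s / sigma ** 2
        s = s_next

# The learned gain should be stabilizing: the closed-loop error shrinks.
s = 1.0
for _ in range(30):
    s, _ = plant(s, theta * s)
print(theta < 0.0, abs(s) < 0.5)
```

Note that, as the abstract points out for the full problem, the learned actor combines "trajectory" and "tracking" in a single feedback policy, whereas an MPC formulation of the same task would re-solve a finite-horizon optimization at every step.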

© 2024 Society of Automotive Engineers of Japan, Inc.

This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
https://creativecommons.org/licenses/by-nc-sa/4.0/