IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Special Section on Intelligent Information Processing Technology to be Integrated into Society
A Multi-Agent Deep Reinforcement Learning Algorithm for Task Offloading in Future 6G V2X Network
Jiakun LI, Jiajian LI, Yanjun SHI, Hui LIAN, Haifan WU

2025 Volume E108.D Issue 7 Pages 697-708

Abstract

In the future 6G Vehicle-to-Everything (V2X) network, task offloading in mobile edge computing (MEC) systems will face complex challenges under high mobility and dynamic environments. We herein propose a Multi-Agent Deep Reinforcement Learning (MADRL) algorithm with cloud-edge-vehicle collaboration to address these challenges. First, we model the task offloading problem in the cloud-edge-vehicle system, which meets low-latency, low-energy computing requirements by coordinating the computational resources of connected vehicles and MEC servers. Then, we reformulate this problem as a Markov Decision Process and propose a digital twin-assisted MADRL algorithm to solve it. This algorithm treats each connected vehicle as an agent, where each agent's observation is defined as the current local environmental state together with global digital twin information. The action space of each agent comprises discrete task offloading targets and continuous resource allocation. The objective of the algorithm is to improve overall system performance while taking collaborative learning among the agents into account. Experimental results show that the MADRL algorithm performs well in computational efficiency and energy consumption compared with other strategies.
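To make the hybrid action space concrete, the sketch below shows one possible per-vehicle policy network that consumes a local observation concatenated with a global digital-twin summary and outputs a discrete offloading target plus a continuous resource-allocation fraction. All names, dimensions, and the observation layout are illustrative assumptions for exposition only, not the authors' implementation.

```python
# Hypothetical sketch of one vehicle-agent's policy for the hybrid action space
# described in the abstract: a discrete offloading target (e.g., local / MEC
# server / cloud) plus a continuous resource-allocation fraction.
import torch
import torch.nn as nn

class VehicleAgentPolicy(nn.Module):
    def __init__(self, local_dim=8, twin_dim=16, n_targets=3, hidden=64):
        super().__init__()
        # Shared encoder over local state + global digital-twin features (assumed layout).
        self.encoder = nn.Sequential(
            nn.Linear(local_dim + twin_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Discrete head: which node to offload the task to.
        self.target_head = nn.Linear(hidden, n_targets)
        # Continuous head: fraction of computing resource to request, in (0, 1).
        self.alloc_head = nn.Linear(hidden, 1)

    def forward(self, local_obs, twin_obs):
        h = self.encoder(torch.cat([local_obs, twin_obs], dim=-1))
        target_logits = self.target_head(h)                   # categorical over offloading targets
        alloc_fraction = torch.sigmoid(self.alloc_head(h))    # continuous resource allocation
        return target_logits, alloc_fraction

# Example: one decision step for a single agent with random observations.
policy = VehicleAgentPolicy()
local_obs = torch.randn(1, 8)    # e.g., queue length, channel gain, task size (assumed features)
twin_obs = torch.randn(1, 16)    # e.g., digital-twin view of edge-server loads (assumed features)
logits, alloc = policy(local_obs, twin_obs)
target = torch.distributions.Categorical(logits=logits).sample()
print(f"offload target: {target.item()}, resource fraction: {alloc.item():.2f}")
```

In a multi-agent training loop, each connected vehicle would run such a policy on its own observation while a shared critic or reward signal encourages collaborative learning across agents, consistent with the cloud-edge-vehicle collaboration described above.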

© 2025 The Institute of Electronics, Information and Communication Engineers