IEICE Transactions on Communications
Online ISSN : 1745-1345
Print ISSN : 0916-8516


Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach
Zixiao ZHANG, Fujun HE, Eiji OKI
Advance online publication

Article ID: 2022EBP3160

Abstract

This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. For dynamic scenarios, we define the state, action, and reward that form the learning approach. The learning agents employ the asynchronous advantage actor-critic (A3C) algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem; the worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches, including three greedy approaches, a simulated annealing approach, and an integer linear programming approach, by applying them in simulated environments. The numerical results show that the introduced deep reinforcement learning approach improves performance by 6-27% in the examined cases.
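To make the master/worker structure described above concrete, the following is a minimal, illustrative Python sketch of an A3C-style setup, not the authors' implementation: several worker threads interact with toy environments in parallel and asynchronously push policy-gradient updates to a shared "master" parameter vector. The state dimension, action set, environment dynamics, and reward below are hypothetical placeholders, not the paper's formulation, and the critic/advantage baseline is omitted for brevity.

```python
# Hedged sketch of an A3C-style master/worker update loop for a toy
# VNF-scheduling task.  All environment details here are assumptions.
import threading
import numpy as np

N_ACTIONS = 3          # hypothetical: pick one of 3 queued VNF requests
STATE_DIM = 4          # hypothetical: node load + three queue lengths
master_theta = np.zeros((STATE_DIM, N_ACTIONS))  # shared (master) policy parameters
lock = threading.Lock()

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def toy_env_step(state, action, rng):
    """Toy dynamics: scheduling a request shortens its queue; the reward is
    the negative residual load, a stand-in for processing delay."""
    next_state = state.copy()
    next_state[action + 1] = max(0.0, next_state[action + 1] - 1.0)
    next_state[0] = next_state[1:].sum()           # aggregate node load
    reward = -next_state[0] + rng.normal(0, 0.1)   # noisy delay proxy
    return next_state, reward

def worker(worker_id, episodes=200, lr=0.01, seed=0):
    rng = np.random.default_rng(seed + worker_id)
    for _ in range(episodes):
        with lock:                        # copy the current master parameters
            theta = master_theta.copy()
        state = rng.uniform(0, 3, STATE_DIM)
        grad = np.zeros_like(theta)
        for _ in range(5):                # short rollout with the local copy
            probs = softmax(state @ theta)
            action = rng.choice(N_ACTIONS, p=probs)
            next_state, reward = toy_env_step(state, action, rng)
            # REINFORCE-style gradient of log pi(a|s) for a linear softmax policy
            dlog = -np.outer(state, probs)
            dlog[:, action] += state
            grad += reward * dlog
            state = next_state
        with lock:                        # asynchronously apply the update
            master_theta += lr * grad

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned policy parameters:\n", master_theta)
```

The lock-protected copy/update pair mirrors the abstract's division of labor: workers explore and compute gradients in parallel, while the master holds the parameters used for the actual scheduling decisions.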

© 2023 The Institute of Electronics, Information and Communication Engineers