IEICE Transactions on Communications
Online ISSN : 1745-1345
Print ISSN : 0916-8516

A final published version of this article is available. Please refer to the published version, and cite the published version when citing this work.

A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading
Cheng ZHANG, Zhi LIU, Bo GU, Kyoko YAMORI, Yoshiaki TANAKA
Author information
Journal (restricted access, advance publication)

Article ID: 2017CQP0014

Abstract

With the rapid increase in demand for mobile data, mobile network operators are trying to expand wireless network capacity by deploying wireless local area network (LAN) hotspots to which they can offload mobile traffic. However, these network-centric methods usually do not serve the interests of mobile users (MUs). Considering issues such as application deadlines, monetary cost, and energy consumption, how an MU decides whether to offload traffic to a complementary wireless LAN is an important question. Previous studies assume that the MU's mobility pattern is known in advance, which is not always true. In this paper, we study the MU's policy for minimizing monetary cost and energy consumption without knowledge of the MU's mobility pattern. We propose to use a reinforcement learning technique called the deep Q-network (DQN), with which the MU can learn the optimal offloading policy from past experience. In the proposed DQN-based offloading algorithm, the MU's mobility pattern is no longer needed. Furthermore, the MU's remaining-data state is fed directly into the convolutional neural network in the DQN without discretization. Therefore, not only does the discretization error present in previous work disappear, but the proposed algorithm also gains the ability to generalize from past experience, which is especially effective when the number of states is large. Extensive simulations are conducted to validate the proposed offloading algorithms.
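To make the abstract's core mechanism concrete, the sketch below shows a DQN agent whose input is the continuous remaining-data state, with no discretization, and whose reward would be the negative weighted sum of monetary cost and energy consumption. This is a minimal illustration under stated assumptions, not the authors' implementation: the state layout, the three-action set, the network sizes, and a small fully connected network standing in for the paper's convolutional network are all assumptions made for the example.

```python
# Minimal DQN sketch for the offloading decision described in the abstract.
# State layout, action set, network sizes, and hyperparameters are
# illustrative assumptions, not the authors' settings.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4   # assumed: remaining data, time to deadline, WLAN availability, battery level
N_ACTIONS = 3   # assumed: 0 = stay idle, 1 = transmit via cellular, 2 = offload to WLAN
GAMMA = 0.99    # discount factor

class QNetwork(nn.Module):
    """Maps a continuous state directly to one Q-value per action,
    so the remaining-data state needs no discretization."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

policy_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s', done)

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q = policy_net(torch.tensor(state, dtype=torch.float32))
        return q.argmax().item()

def train_step(batch_size=32):
    """One gradient step on the standard DQN temporal-difference target."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q = policy_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s2.float()).max(1).values
        target = r.float() + GAMMA * q_next * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In use, each transition pushed into `replay` would carry a reward such as the negative of (monetary cost + a weight times energy consumed) for the chosen action, matching the cost- and energy-aware objective stated in the abstract; the target network would be refreshed from `policy_net` every few hundred steps, as is standard for DQN.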

© 2018 The Institute of Electronics, Information and Communication Engineers