Graph Reinforcement Learning for Radio Resource Allocation

Deep reinforcement learning (DRL) for resource allocation has been investigated extensively owing to its ability to handle model-free and end-to-end problems. Yet the high training complexity of DRL hinders its practical use in dynamic wireless systems. To reduce the training cost, we resort to graph reinforcement learning for exploiting two kinds of relational priors inherent in many problems in wireless communications: topology information and permutation properties. To design a graph reinforcement learning framework that systematically harnesses the two priors, we first conceive a method to transform a state matrix into a state graph, and then propose a general method for graph neural networks to satisfy desirable permutation properties. To demonstrate how to apply the proposed methods, we take deep deterministic policy gradient (DDPG) as an example for optimizing two representative resource allocation problems. One is predictive power allocation, which minimizes the energy consumed for ensuring the quality-of-service of each user that requests video streaming. The other is link scheduling, which maximizes the sum-rate for device-to-device communications. Simulation results show that the graph DDPG algorithm converges much faster and has much lower space complexity than existing DDPG algorithms while achieving the same learning performance.
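The abstract's second prior, permutation properties, can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a simple update of the form used in many GNNs, where each node (e.g. a user or D2D link) combines its own feature with a permutation-invariant aggregate of the others, and the shared weight matrices `W_self` and `W_agg` (hypothetical names) make the layer permutation-equivariant.

```python
import numpy as np

def equivariant_layer(H, W_self, W_agg):
    """One hedged-sketch GNN layer: H is (n_nodes, d_in) node features.

    Each node's update uses its own feature plus the mean of the other
    nodes' features; because the weights are shared across nodes, permuting
    the input rows permutes the output rows identically (equivariance).
    """
    n = H.shape[0]
    # Mean over the *other* nodes: a permutation-invariant aggregator.
    agg = (H.sum(axis=0, keepdims=True) - H) / max(n - 1, 1)
    return np.maximum(H @ W_self + agg @ W_agg, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))        # 4 nodes, 3 features each
W1 = rng.standard_normal((3, 2))
W2 = rng.standard_normal((3, 2))

out = equivariant_layer(H, W1, W2)
perm = np.array([2, 0, 3, 1])
# Reordering the nodes reorders the outputs the same way.
assert np.allclose(equivariant_layer(H[perm], W1, W2), out[perm])
```

Exploiting this symmetry is what allows the parameter count, and hence the training cost, to stay independent of the node ordering, in the spirit of the reduced space complexity reported in the abstract.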
