
Power Control in Ultra Dense Network: A Deep Reinforcement Learning Based Method

南京师范大学学报(工程技术版)[ISSN:1006-6977/CN:61-1281/TN]

Issue:
2022, No. 1
Page:
16-23
Research Field:
Machine Learning
Info

Title:
Power Control in Ultra Dense Network: A Deep Reinforcement Learning Based Method
Author(s):
MAO Jin1,2, XIONG Ke1,2, WEI Ning3,4, ZHANG Yu5, ZHANG Ruichen1,2
(1. School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; 2. Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing 100044, China; 3. ZTE Corporation, Shenzhen 518057, China; 4. State Key Laboratory of Mobile Network and Mobile Multimedia Technology, Shenzhen 518055, China; 5. State Grid Energy Research Institute Co., Ltd., Beijing 102209, China)
Keywords:
ultra-dense networks; power control; information capacity; QoS; deep reinforcement learning
CLC number:
TP391
DOI:
10.3969/j.issn.1672-1292.2022.01.003
Abstract:
For ultra-dense networks, to address the low spectrum utilization caused by dense users and strong mutual interference, an optimization problem is formulated that increases both the system information capacity and the number of users whose quality of service (QoS) is satisfied by optimizing the transmit power. Since the problem is non-convex and the transmit power takes discrete values, it is modeled as a Markov decision process. To this end, a power control algorithm based on deep reinforcement learning is proposed, and the corresponding action space, state space and reward function are designed. Simulation results show that, compared with the maximum transmit power strategy and the random transmit power strategy, the proposed algorithm improves the information capacity by at least 15.9% and the users' QoS satisfaction by at least 10.7%. Moreover, compared with a baseline algorithm that ignores users' QoS, the proposed algorithm improves users' QoS at the cost of a modest reduction in information capacity.
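Read literally, the abstract couples a sum-capacity term with the number of QoS-satisfied users. Under standard interference-channel assumptions (N transmitter-receiver pairs, channel gain g_ij from transmitter j to receiver i, noise power σ², a finite power set 𝒫, rate threshold R_min, and a weight λ — all of this notation is assumed here, not taken from the paper), the formulated problem can be sketched as

$$\max_{p_1,\dots,p_N \in \mathcal{P}} \ \sum_{i=1}^{N} \log_2\!\left(1 + \frac{g_{ii}\,p_i}{\sum_{j \neq i} g_{ij}\,p_j + \sigma^2}\right) \;+\; \lambda \sum_{i=1}^{N} \mathbf{1}\{R_i \geq R_{\min}\}$$

Because each p_i ranges over a finite set, the action space is discrete, which is what makes a deep Q-learning formulation natural. The Python sketch below shows one plausible reading of the abstract's state/action/reward design; every constant, the network size, and the reward weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal DQN-style sketch (not the authors' code) of discrete power control.
# Assumed: N links, Rayleigh channel gains as the state, a finite power set
# as the action space, and reward = sum capacity + bonus per QoS-satisfied user.
import numpy as np
import torch
import torch.nn as nn

N = 5                                  # transmitter-receiver pairs (assumed)
P_LEVELS = np.linspace(0.0, 1.0, 10)   # discrete transmit powers, W (assumed)
NOISE = 1e-3                           # receiver noise power (assumed)
R_MIN = 1.0                            # QoS rate threshold, bit/s/Hz (assumed)

def rates(G, p):
    """Per-link Shannon rate log2(1 + SINR) for gain matrix G and powers p."""
    signal = np.diag(G) * p
    interference = G @ p - signal + NOISE   # cross-link interference + noise
    return np.log2(1.0 + signal / interference)

def reward(G, p):
    """Assumed reward: sum capacity plus 1.0 for each user meeting R_MIN."""
    r = rates(G, p)
    return r.sum() + 1.0 * np.sum(r >= R_MIN)

class QNet(nn.Module):
    """Maps the state (flattened channel gains) to Q-values, one per power level."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions))
    def forward(self, s):
        return self.net(s)

# One epsilon-greedy decision step for link 0 (replay buffer / training omitted):
qnet = QNet(state_dim=N * N, n_actions=len(P_LEVELS))
h = (np.random.randn(N, N) + 1j * np.random.randn(N, N)) / np.sqrt(2)
G = np.abs(h) ** 2                      # Rayleigh-faded channel gains
state = torch.tensor(G.flatten(), dtype=torch.float32)
if np.random.rand() < 0.1:              # explore with probability epsilon
    action = np.random.randint(len(P_LEVELS))
else:
    action = int(qnet(state).argmax())  # exploit the learned Q-values
p = np.full(N, P_LEVELS[-1])            # others at max power for illustration
p[0] = P_LEVELS[action]
print("reward:", reward(G, p))
```

The indicator bonus in the assumed reward is one simple way to encode the abstract's trade-off: raising one link's power grows its own rate but shrinks the capacity term of interfered neighbours, so the agent is nudged toward power levels that keep more users above R_min rather than toward maximum power.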

Last Update: 2022-03-15