1. Beijing Laboratory of Advanced Information Networks, Beijing 100876, China
2. Beijing Key Laboratory of Network System Architecture and Convergence, Beijing 100876, China
3. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
[ "康宇(1999- ),男,北京邮电大学硕士生,主要研究方向为边缘计算、车联网和边缘智能" ]
[ "刘雅琼(1988- ),女,博士,北京邮电大学副教授,主要研究方向为边缘计算、车联网和边缘智能" ]
[ "赵彤雨(1998- ),女,北京邮电大学硕士生,主要研究方向为边缘计算、物联网和边缘智能" ]
[ "寿国础(1965- ),男,博士,北京邮电大学教授,主要研究方向为接入网络与边缘计算、光纤与无线网络虚拟化、网络构建与路由、移动互联网与应用等" ]
Online publication date: 2023-01
Print publication date: 2023-01-20
康宇, 刘雅琼, 赵彤雨, 等. AI算法在车联网通信与计算中的应用综述[J]. 电信科学, 2023, 39(1): 1-19. DOI: 10.11959/j.issn.1000-0801.2023019.
Yu KANG, Yaqiong LIU, Tongyu ZHAO, et al. A survey on AI algorithms applied in communication and computation in Internet of vehicles[J]. Telecommunications science, 2023, 39(1): 1-19. DOI: 10.11959/j.issn.1000-0801.2023019.
在5G时代,车联网的通信和计算发展受到信息量急速增加的限制。将AI算法应用在车联网,可以实现车联网通信和计算方面的新突破。调研了AI算法在通信安全、通信资源分配、计算资源分配、任务卸载决策、服务器部署、通算融合等方面的应用,分析了目前AI算法在不同场景下所取得的成果和存在的不足,结合车联网发展趋势,讨论了AI算法在车联网应用中的未来研究方向。
In the 5G era, the development of communication and computing in the Internet of vehicles has been limited by the rapidly increasing amount of information. New breakthroughs in communication and computing in the Internet of vehicles can be achieved by applying AI algorithms to the Internet of vehicles. Firstly, the applications of AI algorithms in communication security, communication resource allocation, computation resource allocation, task offloading decision, server deployment, and communication-computation integration were investigated. Secondly, the achievements and shortcomings of present AI algorithms in different scenarios were analyzed. Finally, combined with the development trend of the Internet of vehicles, some future research directions for AI algorithms applied in the Internet of vehicles were discussed.
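To make concrete what "applying AI algorithms to the task offloading decision" typically means in the surveyed work, the following sketch formulates the offload-or-not choice as a simple reinforcement learning problem. It is an illustrative toy under assumed parameters (discretized task-size and channel-quality levels, fixed compute rates and bandwidths, tabular Q-learning rather than the deep reinforcement learning used in most of the surveyed papers), not an implementation of any specific method from the survey.

```python
# A minimal, self-contained sketch (NOT from any surveyed paper): the binary
# "compute locally vs. offload to an edge server" decision cast as tabular
# Q-learning. All rates, levels, and hyperparameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Discretized state: (task-size level, channel-quality level); actions: 0 = local, 1 = offload.
N_SIZE, N_CHAN, N_ACTIONS = 4, 3, 2
q_table = np.zeros((N_SIZE, N_CHAN, N_ACTIONS))

ALPHA, EPSILON = 0.1, 0.1                 # learning rate and exploration rate (assumed values)
LOCAL_CPU, EDGE_CPU = 1.0, 4.0            # hypothetical compute rates (normalized)
BANDWIDTH = np.array([0.5, 1.0, 2.0])     # hypothetical uplink rate per channel-quality level

def latency(size_level, chan_level, action):
    """Toy latency model: local execution vs. transmit-then-execute at the edge."""
    cycles = (size_level + 1) * 1.0       # workload grows with task-size level
    data = (size_level + 1) * 0.5         # data volume grows with task-size level
    if action == 0:                       # execute on the vehicle itself
        return cycles / LOCAL_CPU
    return data / BANDWIDTH[chan_level] + cycles / EDGE_CPU   # offload to the edge server

for _ in range(20000):                    # each iteration is one independent task arrival
    s, c = rng.integers(N_SIZE), rng.integers(N_CHAN)
    a = rng.integers(N_ACTIONS) if rng.random() < EPSILON else int(np.argmax(q_table[s, c]))
    reward = -latency(s, c, a)            # minimizing latency = maximizing negative latency
    # One-step update: each offloading decision is treated as a terminal (bandit-style) step.
    q_table[s, c, a] += ALPHA * (reward - q_table[s, c, a])

# The learned policy tends to offload when the channel is good or the task is heavy.
policy = np.argmax(q_table, axis=2)
print("Offloading policy (rows: task-size level, columns: channel-quality level):")
print(policy)
```

In the deep reinforcement learning approaches covered by the survey, the Q-table above would be replaced by a neural network mapping continuous vehicle, channel, and server states to action values, but the decision loop keeps the same shape: observe the state, choose an offloading action, receive a latency- or energy-based reward, and update the value estimate.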