Journal of Xi'an Technological University, 2024, Vol. 44, Issue (6): 764-776. DOI: 10.16185/j.jxatu.edu.cn.2024.06.303


Mobile Edge Computing Offloading Strategy based on Deep Reinforcement Learning for Space-Air-Ground Integrated Networks

XU Fei¹, WANG Zexuan¹, NING Xin¹

Author information

  • 1. School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021


Abstract

To address the high network latency, high energy consumption, and limited computational resources of traditional UAV-based edge computing offloading, this paper presents an integrated space-air-ground network architecture with Low Earth Orbit satellite and Unmanned Aerial Vehicle (LEO-UAV) assisted task offloading, which provides ground devices with more available computational resources and network services. To minimize the delay and energy consumption incurred by offloading tasks, the problem is formulated as a Markov Decision Process (MDP) and solved with the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. Experimental results demonstrate that, compared with baseline algorithms, MADDPG reduces the system's task offloading delay by 44.45% and its energy consumption by 61.35%, verifying the reliability of MADDPG for mobile edge computing offloading.
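The objective summarized in the abstract, minimizing a weighted sum of task delay and device energy over per-task offloading decisions, can be illustrated with a minimal single-step cost model. All numbers and names below (link rates, CPU frequencies, transmit powers, the energy coefficient, and the `step_cost`/`best_action` helpers) are illustrative assumptions for this sketch, not values or code from the paper:

```python
# Hypothetical parameters (not from the paper): uplink rates in bit/s,
# CPU frequencies in cycles/s, transmit powers in W for each offloading tier.
RATE = {"uav": 20e6, "leo": 5e6}                 # uplink data rates
FREQ = {"local": 1e9, "uav": 5e9, "leo": 10e9}   # compute frequencies
TX_POWER = {"uav": 0.5, "leo": 1.0}              # device power while uploading
KAPPA = 1e-27                                    # effective switched capacitance

def step_cost(task_bits, cpu_cycles, action, w_delay=0.5, w_energy=0.5):
    """One MDP step: (delay, energy, reward) for an offloading action.

    action is "local", "uav", or "leo". The reward is the negative
    weighted sum of delay and energy, mirroring the minimization goal.
    """
    if action == "local":
        delay = cpu_cycles / FREQ["local"]
        energy = KAPPA * FREQ["local"] ** 2 * cpu_cycles  # local CPU energy
    else:
        t_up = task_bits / RATE[action]      # upload time over the link
        t_cmp = cpu_cycles / FREQ[action]    # remote execution time
        delay = t_up + t_cmp
        energy = TX_POWER[action] * t_up     # device spends energy only on upload
    return delay, energy, -(w_delay * delay + w_energy * energy)

def best_action(task_bits, cpu_cycles):
    """Greedy single-step baseline: pick the action with the highest reward."""
    return max(["local", "uav", "leo"],
               key=lambda a: step_cost(task_bits, cpu_cycles, a)[2])
```

A greedy one-step chooser like `best_action` is only a baseline; the paper's approach instead trains MADDPG agents so that each device learns an offloading policy from such delay/energy rewards accumulated over episodes.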


Key words

satellite networks / Mobile Edge Computing (MEC) / computational offloading / deep reinforcement learning


Publication year: 2024
Journal: Journal of Xi'an Technological University
Publisher: Xi'an Technological University
Indexed in: CSTPCD, CHSSCD
Impact factor: 0.381
ISSN: 1673-9965