Open Access

ARTICLE

Task Offloading and Trajectory Optimization in UAV Networks: A Deep Reinforcement Learning Method Based on SAC and A-Star

by Jianhua Liu*, Peng Xie, Jiajia Liu, Xiaoguang Tu

Institute of Electronics and Electrical Engineering, Civil Aviation Flight University of China, Deyang, 618307, China

* Corresponding Author: Jianhua Liu.

(This article belongs to the Special Issue: Edge Computing Enabled Internet of Drones)

Computer Modeling in Engineering & Sciences 2024, 141(2), 1243-1273. https://doi.org/10.32604/cmes.2024.054002

Abstract

In mobile edge computing, unmanned aerial vehicles (UAVs) equipped with computing servers have emerged as a promising solution due to their exceptional attributes of high mobility, flexibility, rapid deployment, and terrain agnosticism. These attributes enable UAVs to reach designated areas swiftly, thereby meeting temporary computing demands in scenarios where ground-based servers are overloaded or unavailable. However, the inherent broadcast nature of the line-of-sight transmission methods employed by UAVs renders them vulnerable to eavesdropping attacks. Moreover, real UAV operating areas often contain obstacles that threaten flight safety, and collisions between UAVs may also occur. To address these problems, we propose an innovative A*SAC deep reinforcement learning algorithm that seamlessly integrates the benefits of the Soft Actor-Critic (SAC) and A* (A-Star) algorithms. The algorithm jointly optimizes the hovering position and task offloading proportion of the UAV through a task offloading function. Furthermore, it incorporates a path-planning function that identifies the most energy-efficient route for the UAV to reach its optimal hovering point. This approach not only reduces the flight energy consumption of the UAV but also lowers overall energy consumption, thereby optimizing system-level energy efficiency. Extensive simulation results demonstrate that, compared with other algorithms, our approach achieves superior system benefits: on average, it is 13.18% higher across different computing task sizes, 25.61% higher across different levels of electromagnetic interference power emitted by the auxiliary UAVs, and 35.78% higher across different maximum computing frequencies of the auxiliary UAVs. As for path planning, the simulation results indicate that our algorithm determines the optimal collision-avoidance path for each auxiliary UAV, enabling them to safely reach their designated endpoints in diverse obstacle-ridden environments.
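For readers unfamiliar with the A* component, the sketch below is a minimal illustration of how A* finds a shortest collision-free route on a discretized map with obstacles. It uses a hypothetical 5x5 grid, unit step costs, and a Manhattan-distance heuristic; the paper's version couples the planner with an energy-consumption objective and the hovering points selected by SAC, so this is not the authors' implementation.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid.
    grid[r][c] == 1 marks an obstacle; 0 is free space.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        # Manhattan distance: admissible for unit-cost 4-connected moves.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(heuristic(start), 0, start)]  # (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:
            # Reconstruct the path by walking the parent pointers backwards.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale queue entry superseded by a cheaper one
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(
                        open_set, (ng + heuristic((nr, nc)), ng, (nr, nc))
                    )
    return None

# Hypothetical map: a UAV flies from (0, 0) to its hovering point at (4, 4)
# while routing around the obstacle cells marked with 1.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(a_star(grid, (0, 0), (4, 4)))
```

Replacing the unit step cost with a per-move flight-energy estimate would steer the search toward the most energy-efficient route rather than merely the shortest one, which is the role the path-planning function plays in the proposed A*SAC scheme.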

Keywords


Cite This Article

APA Style
Liu, J., Xie, P., Liu, J., & Tu, X. (2024). Task offloading and trajectory optimization in UAV networks: A deep reinforcement learning method based on SAC and A-star. Computer Modeling in Engineering & Sciences, 141(2), 1243-1273. https://doi.org/10.32604/cmes.2024.054002
Vancouver Style
Liu J, Xie P, Liu J, Tu X. Task offloading and trajectory optimization in UAV networks: A deep reinforcement learning method based on SAC and A-star. Comput Model Eng Sci. 2024;141(2):1243-1273. https://doi.org/10.32604/cmes.2024.054002
IEEE Style
J. Liu, P. Xie, J. Liu, and X. Tu, “Task Offloading and Trajectory Optimization in UAV Networks: A Deep Reinforcement Learning Method Based on SAC and A-Star,” Comput. Model. Eng. Sci., vol. 141, no. 2, pp. 1243-1273, 2024. https://doi.org/10.32604/cmes.2024.054002



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.