
Open Access

ARTICLE

Distributed Resource Allocation in Dispersed Computing Environment Based on UAV Track Inspection in Urban Rail Transit

Tong Gan1, Shuo Dong1, Shiyou Wang1, Jiaxin Li2,*
1 Division of Consulting, Beijing Metro Consultancy Corporation Ltd., Beijing, 100037, China
2 Department of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
* Corresponding Author: Jiaxin Li. Email: email

Computers, Materials & Continua https://doi.org/10.32604/cmc.2024.051408

Received 05 March 2024; Accepted 16 May 2024; Published online 11 June 2024

Abstract

With the rapid development of urban rail transit, existing track inspection suffers from problems such as low efficiency and insufficient detection coverage, so an intelligent, automated UAV-based track inspection method is urgently needed to prevent major safety accidents. At the same time, the geographical dispersion of IoT devices leaves the substantial computing potential of a large number of devices underused. To address this, the Dispersed Computing (DCOMP) architecture enables collaborative computing between devices in the Internet of Everything (IoE), supports low-latency applications across wide areas, and meets users’ growing demands for computing performance and service quality. This paper examines the resource allocation problem in a dispersed computing environment that supports UAV track inspection. The system accounts for both resource constraints and computational constraints, and the optimization problem is transformed into an energy minimization problem under computational constraints. A Markov Decision Process (MDP) model is employed to capture the relationship between the dispersed computing resource allocation strategy and the system environment. A method based on Double Deep Q-Network (DDQN) is then introduced to derive the optimal policy, and an experience replay mechanism is implemented to address the issue of increasing dimensionality. Experimental simulations validate the efficacy of the method across various scenarios.

Keywords

UAV track inspection; dispersed computing; resource allocation; deep reinforcement learning; Markov decision process
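The paper's DDQN-based allocation method is not reproduced on this page; as an illustration of the two components the abstract names, the sketch below shows the Double DQN target computation (online network selects the action, target network evaluates it) together with a simple experience replay buffer. This is a minimal sketch under assumed shapes: the tabular `q_online`/`q_target` arrays, the state/action sizes, and the transition format are hypothetical stand-ins, not the paper's model.

```python
import random
from collections import deque

import numpy as np

# Hypothetical tabular stand-ins for the online and target Q-networks:
# each is a (num_states, num_actions) array of Q-value estimates.
rng = np.random.default_rng(0)
q_online = rng.normal(size=(4, 3))
q_target = rng.normal(size=(4, 3))

def ddqn_target(reward, next_state, gamma=0.99, done=False):
    """Double DQN target: the online net selects the greedy action,
    the target net evaluates it, reducing Q-value overestimation."""
    if done:
        return reward
    best_action = int(np.argmax(q_online[next_state]))         # selection
    return reward + gamma * q_target[next_state, best_action]  # evaluation

# Experience replay: store transitions and sample mini-batches
# uniformly, breaking the temporal correlation of consecutive steps.
buffer = deque(maxlen=10_000)
for _ in range(100):
    s = int(rng.integers(4))
    a = int(rng.integers(3))
    buffer.append((s, a, float(rng.random()), int(rng.integers(4)), False))

batch = random.sample(buffer, 32)
targets = [ddqn_target(r, s2, done=d) for (_, _, r, s2, d) in batch]
```

In a full implementation the two arrays would be neural networks, with the target network's weights periodically copied from the online network; the decoupled selection/evaluation step shown here is what distinguishes DDQN from vanilla DQN.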