Search Results (83)
  • Open Access

    ARTICLE

    Research on UAV–MEC Cooperative Scheduling Algorithms Based on Multi-Agent Deep Reinforcement Learning

    Yonghua Huo1,2, Ying Liu1,*, Anni Jiang3, Yang Yang3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072681 - 12 January 2026

    Abstract With the advent of sixth-generation mobile communications (6G), space–air–ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air–ground-fused MEC system, unify link selection, bandwidth/power allocation, and task…
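
A minimal sketch of how per-agent observations and actions in such a multi-agent MDP might be structured. The field names, dimensions, and the simple latency/energy reward are illustrative assumptions for this listing, not the paper's actual formulation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical per-agent structures for a UAV-MEC multi-agent MDP.
# Field names and semantics are illustrative assumptions only.

@dataclass
class AgentObservation:
    channel_gains: List[float]   # fast-fading gains to each UAV / base station
    queue_backlog: float         # local computational load awaiting service
    battery_level: float         # remaining device energy (normalized 0..1)

@dataclass
class AgentAction:
    link_choice: int             # index of the UAV or macro BS to offload to
    bandwidth_share: float       # fraction of the chosen link's bandwidth
    tx_power: float              # transmit power within the device's budget
    offload_ratio: float         # portion of the task executed remotely


def reward(latency: float, energy: float, alpha: float = 0.5) -> float:
    """Illustrative reward trading off task latency against energy use."""
    return -(alpha * latency + (1.0 - alpha) * energy)
```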

  • Open Access

    ARTICLE

    DRL-Based Task Scheduling and Trajectory Control for UAV-Assisted MEC Systems

    Sai Xu1,*, Jun Liu1,*, Shengyu Huang1, Zhi Li2

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071865 - 12 January 2026

    Abstract In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) act as mobile edge computing (MEC) servers to provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectory under limited resources and high UAV mobility, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that integrates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with…
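
The abstract adds prioritized experience replay (PER) to a shared replay buffer under CTDE. Below is a minimal, generic sketch of proportional PER, shown as an assumption-level illustration of the standard technique rather than the authors' PER-MATD3 implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (simplified sketch).

    Transitions are sampled with probability p_i^alpha / sum_j p_j^alpha,
    where p_i is the last absolute TD error recorded for transition i.
    """

    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []                       # stored transitions (shared across agents)
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, td_error: float = 1.0):
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        # New transitions get at least the current maximum priority.
        self.priorities[self.pos] = max(abs(td_error), max_prio)
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int, beta: float = 0.4):
        prios = self.priorities[: len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + 1e-6
```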

  • Open Access

    ARTICLE

    AquaTree: Deep Reinforcement Learning-Driven Monte Carlo Tree Search for Underwater Image Enhancement

    Chao Li1,3,#, Jianing Wang1,3,#, Caichang Ding2,*, Zhiwei Ye1,3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071242 - 12 January 2026

    Abstract Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance, correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. Our core contributions include: (1) a…
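
A bare-bones skeleton of the MCTS selection, expansion, and backup steps over a 25-operator action space is sketched below. The UCB1 scoring and node layout are generic textbook choices, not AquaTree's dual-branch-network-guided search.

```python
import math

N_OPERATORS = 25  # the abstract states an action space of 25 enhancement operators

class Node:
    """Generic UCT-style search node over enhancement-operator choices."""

    def __init__(self, parent=None, action=None):
        self.parent = parent
        self.action = action        # index of the operator applied to reach this node
        self.children = []
        self.visits = 0
        self.value_sum = 0.0

    def ucb1(self, c: float = 1.4) -> float:
        if self.visits == 0:
            return float("inf")     # force unvisited children to be tried first
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

    def select_child(self):
        return max(self.children, key=lambda ch: ch.ucb1())

    def expand(self):
        tried = {ch.action for ch in self.children}
        for a in range(N_OPERATORS):
            if a not in tried:
                child = Node(parent=self, action=a)
                self.children.append(child)
                return child
        return self.select_child()  # fully expanded: fall back to selection

    def backpropagate(self, value: float):
        node = self
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
```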

  • Open Access

    ARTICLE

    A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration

    Changcheng Li1,2, Weimeng Chang1,2, Dahai Zhang1,*, Jinghan He1

    Energy Engineering, Vol.123, No.1, 2026, DOI:10.32604/ee.2025.069389 - 27 December 2025

    Abstract Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity. Corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. The soft bonus scaling mechanism…
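
A small sketch of a modularity-based partitioning reward with a normalized constraint penalty. The penalty term and its scaling are placeholder assumptions standing in for the paper's relative deviation normalization scheme, not the actual reward design.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def partition_reward(graph: nx.Graph, partition, constraint_violation: float,
                     max_violation: float, weight: float = 1.0) -> float:
    """Reward = modularity of the partition minus a normalized constraint penalty."""
    q = modularity(graph, partition)          # Newman modularity of the partition
    # Normalize the penalty to a comparable scale so it does not swamp the reward.
    penalty = constraint_violation / max_violation if max_violation > 0 else 0.0
    return q - weight * penalty


if __name__ == "__main__":
    g = nx.karate_club_graph()                # stand-in graph, not a power network
    parts = [set(range(0, 17)), set(range(17, 34))]
    print(partition_reward(g, parts, constraint_violation=0.0, max_violation=1.0))
```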

  • Open Access

    ARTICLE

    A Multi-Objective Adaptive Car-Following Framework for Autonomous Connected Vehicles with Deep Reinforcement Learning

    Abu Tayab1,*, Yanwen Li1, Ahmad Syed2, Ghanshyam G. Tejani3,4,*, Doaa Sami Khafaga5, El-Sayed M. El-kenawy6, Amel Ali Alhussan7, Marwa M. Eid8,9

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-27, 2026, DOI:10.32604/cmc.2025.070583 - 09 December 2025

    Abstract Autonomous connected vehicles (ACVs) require advanced control strategies to effectively balance safety, efficiency, energy consumption, and passenger comfort. This research introduces a deep reinforcement learning (DRL)-based car-following (CF) framework employing the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates a multi-objective reward function that balances these four goals while maintaining safe policy learning. Utilizing real-world driving data from the highD dataset, the proposed model learns adaptive speed control policies suitable for dynamic traffic scenarios. The performance of the DRL-based model is evaluated against a traditional model predictive control-adaptive cruise control (MPC-ACC) controller. Results show that the…
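
A minimal sketch of a multi-objective car-following reward combining safety, efficiency, energy, and comfort terms. The weights, thresholds, and individual terms are assumptions made for illustration, not the reward used in the paper.

```python
def car_following_reward(gap: float, ego_speed: float, desired_speed: float,
                         accel: float, jerk: float,
                         w_safe: float = 0.4, w_eff: float = 0.3,
                         w_energy: float = 0.2, w_comfort: float = 0.1) -> float:
    # Safety: penalize closing in on the leader below a minimum time headway.
    headway = gap / max(ego_speed, 0.1)
    r_safe = -1.0 if headway < 1.0 else 0.0

    # Efficiency: reward tracking the desired speed.
    r_eff = -abs(ego_speed - desired_speed) / max(desired_speed, 1.0)

    # Energy: penalize large acceleration magnitudes as a proxy for consumption.
    r_energy = -abs(accel)

    # Comfort: penalize jerk (rate of change of acceleration).
    r_comfort = -abs(jerk)

    return (w_safe * r_safe + w_eff * r_eff +
            w_energy * r_energy + w_comfort * r_comfort)
```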

  • Open Access

    ARTICLE

    DRL-Based Cross-Regional Computation Offloading Algorithm

    Lincong Zhang1, Yuqing Liu1, Kefeng Wei2, Weinan Zhao1, Bo Qian1,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-18, 2026, DOI:10.32604/cmc.2025.069108 - 10 November 2025

    Abstract In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average…
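
A toy latency comparison for the local-versus-offload decision that such an approach optimizes. The linear transmission/computation model and the example numbers are assumptions for illustration, not the paper's system model.

```python
def local_latency(task_cycles: float, local_cps: float) -> float:
    """Latency of executing the task on the vehicle's own processor."""
    return task_cycles / local_cps

def offload_latency(task_bits: float, uplink_bps: float,
                    task_cycles: float, edge_cps: float,
                    backhaul_s: float = 0.0) -> float:
    """Uplink transmission + optional cross-region backhaul + remote computation."""
    return task_bits / uplink_bps + backhaul_s + task_cycles / edge_cps

def choose_target(task_bits, task_cycles, local_cps, uplink_bps, edge_cps,
                  backhaul_s=0.0) -> str:
    t_local = local_latency(task_cycles, local_cps)
    t_edge = offload_latency(task_bits, uplink_bps, task_cycles, edge_cps, backhaul_s)
    return "edge" if t_edge < t_local else "local"


if __name__ == "__main__":
    # 2 MB task, 1e9 CPU cycles, 1 GHz device, 20 Mbps uplink, 10 GHz edge server.
    print(choose_target(2e6 * 8, 1e9, 1e9, 20e6, 10e9))   # -> "edge"
```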

  • Open Access

    ARTICLE

    Energy Optimization for Autonomous Mobile Robot Path Planning Based on Deep Reinforcement Learning

    Longfei Gao*, Weidong Wang, Dieyun Ke

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.068873 - 10 November 2025

    Abstract At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the…
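
A generic Dueling DQN head is sketched below: separate value and advantage streams recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). The attention-enhanced encoder described in the abstract is replaced by a plain MLP here for brevity; dimensions are placeholders.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling architecture with a shared encoder and separate V/A streams."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value_stream = nn.Linear(hidden, 1)               # V(s)
        self.advantage_stream = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.encoder(obs)
        v = self.value_stream(h)
        a = self.advantage_stream(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)


if __name__ == "__main__":
    net = DuelingQNetwork(obs_dim=12, n_actions=8)
    q_values = net(torch.randn(4, 12))   # batch of 4 observations
    print(q_values.shape)                # torch.Size([4, 8])
```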

  • Open Access

    ARTICLE

    A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles

    Junjun Ren1, Guoqiang Chen2, Zheng-Yi Chai3, Dong Yuan4,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-26, 2026, DOI:10.32604/cmc.2025.068795 - 10 November 2025

    Abstract Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By…
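
A small sketch of the coupling between service caching and offloading: a task can only be served at the RSU if the service it needs is cached there, otherwise it falls back to the cloud. The LRU eviction policy and capacity model are illustrative assumptions, not the paper's joint caching scheme.

```python
from collections import OrderedDict

class RsuServiceCache:
    """RSU-side service cache with simple LRU eviction (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()               # service_id -> service size

    def is_cached(self, service_id: str) -> bool:
        if service_id in self.cache:
            self.cache.move_to_end(service_id)   # refresh recency on a hit
            return True
        return False

    def admit(self, service_id: str, size: float):
        self.cache[service_id] = size
        self.cache.move_to_end(service_id)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict least recently used


def offload_target(cache: RsuServiceCache, service_id: str) -> str:
    """Offload to the RSU on a cache hit; otherwise fall back to the cloud."""
    return "rsu" if cache.is_cached(service_id) else "cloud"
```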

  • Open Access

    ARTICLE

    Enhancement of Frequency Regulation in AC-Excited Adjustable-Speed Pumped Storage Units during Pumping Operations

    Shuxin Tan1, Wei Yan2, Lei Zhao1, Xianglin Zhang3,*, Ziqiang Man2, Yu Lu2, Teng Liu2, Gaoyue Zhong2, Weiqun Liu2, Linjun Shi3

    Energy Engineering, Vol.122, No.12, pp. 5175-5197, 2025, DOI:10.32604/ee.2025.068692 - 27 November 2025

    Abstract The integration of large-scale renewable energy introduces frequency instability challenges due to inherent intermittency. While doubly-fed pumped storage units (DFPSUs) offer frequency regulation potential in pumping mode, conventional strategies fail to address hydraulic-mechanical coupling dynamics and operational constraints, limiting their effectiveness. This paper presents an innovative primary frequency control strategy for DFPSUs operating in pumping mode, integrating an adaptive parameter calculation method. This method is constrained by operational speed and power limits, addressing key performance factors. A dynamic model that incorporates the reversible pump-turbine characteristics is developed to translate frequency deviations…
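
A toy droop-style primary frequency response with the power adjustment clipped to operating limits, to illustrate the kind of constrained setpoint adjustment the abstract describes. The droop gain, deadband, and limits are assumed values, not the paper's adaptive parameter calculation.

```python
def droop_power_adjustment(freq_dev_hz: float, p_current_mw: float,
                           p_min_mw: float, p_max_mw: float,
                           droop_mw_per_hz: float = 50.0,
                           deadband_hz: float = 0.033) -> float:
    """Return the new pumping-power setpoint after a primary frequency response."""
    if abs(freq_dev_hz) <= deadband_hz:
        return p_current_mw                       # inside the deadband: no action
    # Under-frequency (negative deviation) -> reduce pumping power, and vice versa.
    delta = droop_mw_per_hz * freq_dev_hz
    p_new = p_current_mw + delta
    return min(max(p_new, p_min_mw), p_max_mw)    # respect speed/power limits


if __name__ == "__main__":
    # A 0.1 Hz under-frequency event cuts pumping power from 250 MW to 245 MW.
    print(droop_power_adjustment(-0.1, p_current_mw=250.0,
                                 p_min_mw=200.0, p_max_mw=300.0))
```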

  • Open Access

    ARTICLE

    Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach

    Mohammad A Al Khaldy1, Ahmad Nabot2, Ahmad Al-Qerem3,*, Mohammad Alauthman4, Amina Salhi5,*, Suhaila Abuowaida6, Naceur Chihaoui7

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.1, pp. 673-697, 2025, DOI:10.32604/cmes.2025.070004 - 30 October 2025

    Abstract The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1)…
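
A minimal priority-queue scheduler where tasks with earlier deadlines are dispatched first. The deadline-based priority rule is a simple stand-in for the adaptive, DRL-learned priority assignment described in the abstract; all names and values are illustrative.

```python
import heapq
import itertools
from dataclasses import dataclass, field

_counter = itertools.count()   # tie-breaker so equal priorities stay FIFO

@dataclass(order=True)
class QueuedTask:
    priority: float                               # smaller = more urgent
    seq: int
    name: str = field(compare=False, default="")

class PriorityScheduler:
    """Dispatches the most urgent queued task first (illustrative sketch)."""

    def __init__(self):
        self._heap = []

    def submit(self, name: str, deadline_s: float):
        # Earlier deadline -> higher urgency -> popped earlier.
        heapq.heappush(self._heap, QueuedTask(deadline_s, next(_counter), name))

    def next_task(self) -> str:
        return heapq.heappop(self._heap).name


if __name__ == "__main__":
    sched = PriorityScheduler()
    sched.submit("sensor-aggregation", deadline_s=2.0)
    sched.submit("emergency-alert", deadline_s=0.1)
    print(sched.next_task())   # -> emergency-alert
```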

Displaying results 1-10 of 83 (page 1).