Open Access

ARTICLE


Distributed Graph Database Load Balancing Method Based on Deep Reinforcement Learning

by Shuming Sha1,2, Naiwang Guo3, Wang Luo1,2, Yong Zhang1,2,*

1 Nanjing NARI Information & Communication Technology Co., Ltd., Nanjing, 210032, China
2 State Grid Electric Power Research Institute, Nanjing, 211106, China
3 State Grid Shanghai Municipal Electric Power Company, Shanghai, 200122, China

* Corresponding Author: Yong Zhang.

Computers, Materials & Continua 2024, 79(3), 5105-5124. https://doi.org/10.32604/cmc.2024.049584

Abstract

This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. Scheduling them requires distributing the constituent computational tasks to appropriate computing nodes in accordance with their dependencies so that the entire workflow completes smoothly. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper investigates the workflow task scheduling problem in distributed graph databases and proposes a workflow scheduling method based on deep reinforcement learning (DRL). The method jointly optimizes the maximum completion time (makespan) and the response time of workflow tasks, aiming to improve responsiveness while minimizing the makespan. Experimental results show that the proposed Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly reduces the makespan and improves the average response time in distributed graph database environments. In terms of makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively, and outperforms the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms by 4.4% and 2.6%, respectively. In terms of average response time, Q-DRL reduces the average by 2.27% and 4.71% compared with IDQN and DRL-Cloud, respectively. Q-DRL also uses system resources more efficiently, lowering the average idle rate by 5.02% and 9.30% relative to IDQN and DRL-Cloud, respectively. These findings indicate that Q-DRL maintains a lower average idle rate while effectively reducing the average response time, thereby improving processing efficiency and resource utilization in distributed graph database systems.
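The abstract gives no implementation details, so the following is only a minimal, assumption-laden sketch of how a reinforcement-learning scheduler might assign DAG-structured workflow tasks to computing nodes while rewarding short response times. The toy workflow, the coarse state discretization, the reward shaping, and the hyperparameters (ALPHA, GAMMA, EPSILON) are all hypothetical; the paper's Q-DRL method relies on deep networks rather than the tabular Q-table used here.

```python
# Illustrative sketch (not the authors' implementation) of Q-learning-based
# scheduling of dependent workflow tasks onto compute nodes.
import random
from collections import defaultdict

NUM_NODES = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning rate, discount, exploration rate

# Toy workflow: task -> (duration, prerequisite tasks); values are hypothetical.
WORKFLOW = {
    0: (4, []), 1: (3, [0]), 2: (2, [0]),
    3: (5, [1, 2]), 4: (1, [3]),
}

Q = defaultdict(float)  # Q[(state, node)] -> estimated value

def state_of(node_free):
    # Discretize each node's next-idle time into coarse load buckets.
    return tuple(min(t // 3, 3) for t in node_free)

def schedule_episode(learn=True):
    node_free = [0] * NUM_NODES   # time at which each node becomes idle
    finish = {}                   # task -> finish time
    total_response = 0
    # Tasks are visited in a topological order of the toy DAG.
    for task, (dur, deps) in WORKFLOW.items():
        ready = max([finish[d] for d in deps], default=0)
        s = state_of(node_free)
        if learn and random.random() < EPSILON:
            a = random.randrange(NUM_NODES)                      # explore
        else:
            a = max(range(NUM_NODES), key=lambda n: Q[(s, n)])   # exploit
        start = max(ready, node_free[a])
        end = start + dur
        node_free[a] = end
        finish[task] = end
        total_response += end - ready
        if learn:
            # Reward favours short task response time (assumed reward shaping).
            r = -(end - ready)
            s2 = state_of(node_free)
            best_next = max(Q[(s2, n)] for n in range(NUM_NODES))
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    return max(finish.values()), total_response / len(WORKFLOW)

for _ in range(2000):             # training episodes
    schedule_episode(learn=True)
makespan, avg_resp = schedule_episode(learn=False)
print(f"makespan={makespan}, avg response={avg_resp:.2f}")
```

In this sketch the response-time penalty stands in for the paper's joint makespan/response-time objective; a fuller treatment would encode node load and task features in the state and learn the value function with a deep network, as the Q-DRL name suggests.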

Keywords


Cite This Article

APA Style
Sha, S., Guo, N., Luo, W., & Zhang, Y. (2024). Distributed graph database load balancing method based on deep reinforcement learning. Computers, Materials & Continua, 79(3), 5105-5124. https://doi.org/10.32604/cmc.2024.049584
Vancouver Style
Sha S, Guo N, Luo W, Zhang Y. Distributed graph database load balancing method based on deep reinforcement learning. Comput Mater Contin. 2024;79(3):5105-5124. https://doi.org/10.32604/cmc.2024.049584
IEEE Style
S. Sha, N. Guo, W. Luo, and Y. Zhang, “Distributed Graph Database Load Balancing Method Based on Deep Reinforcement Learning,” Comput. Mater. Contin., vol. 79, no. 3, pp. 5105-5124, 2024. https://doi.org/10.32604/cmc.2024.049584



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.