Open Access
ARTICLE
Multi-Agent Dynamic Area Coverage Based on Reinforcement Learning with Connected Agents
1 STM Defence Technologies Engineering and Trade. Inc., Ankara, 06560, Turkey
2 Department of Computer Engineering, Faculty of Technology, Gazi University, Ankara, 06500, Turkey
* Corresponding Author: Aydin Cetin. Email:
Computer Systems Science and Engineering 2023, 45(1), 215-230. https://doi.org/10.32604/csse.2023.031116
Received 11 April 2022; Accepted 09 June 2022; Issue published 16 August 2022
Abstract
Dynamic area coverage with small unmanned aerial vehicle (UAV) systems is a major research topic due to limited payloads and the difficulty of the decentralized decision-making process. Achieving collaborative behavior among a group of UAVs in an unknown environment is another challenging problem. In this paper, we propose a method for decentralized execution of multiple UAVs for dynamic area coverage problems. The proposed decentralized decision-making dynamic area coverage (DDMDAC) method utilizes reinforcement learning (RL), where each UAV is represented by an intelligent agent that learns policies to create collaborative behaviors in a partially observable environment. Intelligent agents extend their global observations by gathering information about the environment through connections with other agents. This connectivity provides a consensus for the decision-making process while each agent makes its own decisions. At each step, agents acquire the states of all reachable agents, determine the optimum location for maximal area coverage, and receive a reward based on the covered rate of the target area. The method was tested on a multi-agent actor-critic simulation platform. In the study, each UAV was assumed to have a limited communication distance, as in real applications. The results show that UAVs with limited communication distance can act jointly in the target area and successfully cover it without guidance from a central command unit.
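The per-step loop described in the abstract (acquire reachable agents' states, choose a location maximizing coverage, receive the covered rate as reward) can be sketched as follows. This is an illustrative sketch only, not the authors' DDMDAC implementation: all names (`reachable`, `coverage_rate`, `step`) and parameters (`COMM_RANGE`, `SENSE_RANGE`, the grid size, the greedy candidate search standing in for the learned policy) are assumptions.

```python
import math

COMM_RANGE = 4.0    # assumed communication distance
SENSE_RANGE = 2.0   # assumed sensing/coverage radius
GRID = [(x, y) for x in range(10) for y in range(10)]  # target-area cells

def reachable(me, agents, comm_range=COMM_RANGE):
    """States of peers within communication distance of `me`."""
    return [a for a in agents
            if a is not me and math.dist(me, a) <= comm_range]

def coverage_rate(positions, cells=GRID, r=SENSE_RANGE):
    """Fraction of target cells covered by at least one agent."""
    covered = sum(1 for c in cells
                  if any(math.dist(c, p) <= r for p in positions))
    return covered / len(cells)

def step(me, agents):
    """One decision step: a greedy stand-in for the learned policy.
    The agent moves to the neighboring cell that maximizes coverage
    jointly with its reachable peers, and is rewarded by the covered rate."""
    peers = reachable(me, agents)
    candidates = [me] + [(me[0] + dx, me[1] + dy)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    best = max(candidates, key=lambda p: coverage_rate([p] + peers))
    reward = coverage_rate([best] + peers)  # covered rate as reward
    return best, reward
```

Because each agent evaluates coverage only over the peers it can actually reach, the decision remains decentralized: no agent needs the full global state, matching the paper's limited-communication assumption.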
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.