Open Access

ARTICLE


Real-Time Implementation of Quadrotor UAV Control System Based on a Deep Reinforcement Learning Approach

Taha Yacine Trad1,*, Kheireddine Choutri1, Mohand Lagha1, Souham Meshoul2, Fouad Khenfri3, Raouf Fareh4, Hadil Shaiba5

1 Aeronautical Sciences Laboratory, Aeronautical and Spatial Studies Institute, Blida 1 University, Blida, 0900, Algeria
2 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
3 Energy and Embedded Systems for Transportation Research Department, ESTACA-LAB, Montigny-Le-Bretonneux, 78066, France
4 Department of Electrical Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
5 Department of Computer Science, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia

* Corresponding Author: Taha Yacine Trad. Email: email-blida.dz

(This article belongs to the Special Issue: Intelligent Manufacturing, Robotics and Control Engineering)

Computers, Materials & Continua 2024, 81(3), 4757-4786. https://doi.org/10.32604/cmc.2024.055634

Abstract

The popularity of quadrotor Unmanned Aerial Vehicles (UAVs) stems from their simple propulsion systems and structural design. However, their complex and nonlinear dynamic behavior presents a significant challenge for control, necessitating sophisticated algorithms to ensure stability and accuracy in flight. Various strategies have been explored by researchers and control engineers, with learning-based methods such as reinforcement learning, deep learning, and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control systems. This paper investigates a Reinforcement Learning (RL) approach for both high- and low-level quadrotor control systems, focusing on attitude stabilization and position tracking tasks. A novel reward function and actor-critic network structures are designed to stimulate high-order observable states, improving the agent's understanding of the quadrotor's dynamics and environmental constraints. To address the challenge of RL hyperparameter tuning, a new framework is introduced that combines Simulated Annealing (SA) with a reinforcement learning algorithm, specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient (SA-TD3). This approach is evaluated for path-following and stabilization tasks through comparative assessments with two commonly used control methods: Backstepping and Sliding Mode Control (SMC). While the implementation of the well-trained agents exhibited unexpected behavior during real-world testing, a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini drone. The results showcase the potential of the proposed SA-TD3 framework for real-world applications, demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
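To illustrate the hyperparameter-tuning idea behind SA-TD3, the sketch below shows a generic simulated-annealing loop over a hyperparameter dictionary. The `evaluate` callback stands in for the expensive step of training a TD3 agent with the candidate hyperparameters and scoring its tracking performance; here a toy surrogate cost (`toy_cost`) and the perturbation bounds are illustrative assumptions, not the paper's actual objective or search space.

```python
import math
import random

def simulated_annealing(evaluate, initial, neighbors,
                        t0=1.0, cooling=0.9, steps=50, seed=0):
    """Minimize evaluate(params) by simulated annealing.

    evaluate(params) -> scalar cost (lower is better), e.g. the negative
    mean episode return of a TD3 agent trained with `params`.
    neighbors(params, rng) -> a perturbed copy of `params`.
    """
    rng = random.Random(seed)
    current, best = dict(initial), dict(initial)
    cur_cost = best_cost = evaluate(current)
    t = t0
    for _ in range(steps):
        cand = neighbors(current, rng)
        cost = evaluate(cand)
        # Always accept improvements; accept worse candidates with
        # Boltzmann probability exp(-delta / t), shrinking as t cools.
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / max(t, 1e-12)):
            current, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = dict(cand), cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy stand-in for "train TD3 and measure control error" (assumed, for demo only):
def toy_cost(p):
    return (math.log10(p["actor_lr"]) + 3.5) ** 2 + 100 * (p["gamma"] - 0.99) ** 2

def perturb(p, rng):
    q = dict(p)
    q["actor_lr"] = min(1e-2, max(1e-5, p["actor_lr"] * 10 ** rng.uniform(-0.3, 0.3)))
    q["gamma"] = min(0.999, max(0.9, p["gamma"] + rng.uniform(-0.01, 0.01)))
    return q

best, cost = simulated_annealing(toy_cost, {"actor_lr": 1e-3, "gamma": 0.95}, perturb)
```

In the full framework, each `evaluate` call would be a complete (and costly) training run, which is why annealing's small number of sequential trials is attractive compared with grid search.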

Keywords

Deep reinforcement learning; hyper-parameters optimization; path following; quadrotor; twin delayed deep deterministic policy gradient and simulated annealing

Cite This Article

APA Style
Trad, T.Y., Choutri, K., Lagha, M., Meshoul, S., Khenfri, F. et al. (2024). Real-Time Implementation of Quadrotor UAV Control System Based on a Deep Reinforcement Learning Approach. Computers, Materials & Continua, 81(3), 4757–4786. https://doi.org/10.32604/cmc.2024.055634
Vancouver Style
Trad TY, Choutri K, Lagha M, Meshoul S, Khenfri F, Fareh R, et al. Real-Time Implementation of Quadrotor UAV Control System Based on a Deep Reinforcement Learning Approach. Comput Mater Contin. 2024;81(3):4757–4786. https://doi.org/10.32604/cmc.2024.055634
IEEE Style
T. Y. Trad et al., “Real-Time Implementation of Quadrotor UAV Control System Based on a Deep Reinforcement Learning Approach,” Comput. Mater. Contin., vol. 81, no. 3, pp. 4757–4786, 2024. https://doi.org/10.32604/cmc.2024.055634



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.