Open Access

ARTICLE

Drone-Based Public Surveillance Using 3D Point Clouds and Neuro-Fuzzy Classifier

Yawar Abbas1, Aisha Ahmed Alarfaj2, Ebtisam Abdullah Alabdulqader3, Asaad Algarni4, Ahmad Jalal1,5, Hui Liu6,*

1 Faculty of Computing and AI, Air University, Islamabad, 44000, Pakistan
2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
3 Department of Information Technology, College of Computer and Information Sciences, King Saud University, Riyadh, 12372, Saudi Arabia
4 Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha, 91911, Saudi Arabia
5 Department of Computer Science and Engineering, College of Informatics, Korea University, Seoul, 02841, Republic of Korea
6 Cognitive Systems Lab, University of Bremen, Bremen, 28359, Germany

* Corresponding Author: Hui Liu. Email: email

(This article belongs to the Special Issue: New Trends in Image Processing)

Computers, Materials & Continua 2025, 82(3), 4759-4776. https://doi.org/10.32604/cmc.2025.059224

Abstract

Human Activity Recognition (HAR) in drone-captured videos has attracted growing interest in fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses several challenges: variations in human motion, complex backgrounds, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges using drones’ red-green-blue (RGB) videos. The proposed system first partitions each video into frames, applies bilateral filtering to enhance object foregrounds while suppressing background interference, and then converts the frames from RGB to grayscale. The YOLO (You Only Look Once) algorithm detects and extracts humans from each frame, and their skeletons are obtained for further processing. Extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed to a Neuro-Fuzzy Classifier (NFC) for activity classification. Evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets confirm that the proposed system outperforms existing methods in accuracy. In particular, it achieves recognition rates of 93% on Drone-Action, 97% on UAV-Gesture, and 81% on Okutama-Action, demonstrating its reliability in recognizing human activities from drone videos.
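The pre-processing and detection stages described in the abstract can be illustrated with a minimal Python sketch. This is not the authors’ implementation: the YOLO variant, the filter parameters, and the use of OpenCV and the Ultralytics library are assumptions made only to show the order of operations (frame partitioning, bilateral filtering, grayscale conversion, person detection).

    # Minimal sketch of the pre-processing and person-detection steps described
    # in the abstract. Library choices (OpenCV, Ultralytics YOLO) and all
    # parameter values are illustrative assumptions, not the authors' settings.
    import cv2
    from ultralytics import YOLO

    detector = YOLO("yolov8n.pt")  # assumed YOLO model; the paper does not specify the variant

    def preprocess_and_detect(video_path):
        cap = cv2.VideoCapture(video_path)
        person_patches = []
        while True:
            ok, frame = cap.read()                      # 1) partition video into frames
            if not ok:
                break
            # 2) bilateral filter: sharpen the foreground while smoothing background clutter
            filtered = cv2.bilateralFilter(frame, 9, 75, 75)
            # 3) RGB-to-grayscale conversion for later feature extraction (e.g., HOG)
            gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
            # 4) YOLO detection of humans (COCO class 0 = person)
            results = detector(frame, classes=[0], verbose=False)
            for box in results[0].boxes.xyxy.cpu().numpy():
                x1, y1, x2, y2 = box.astype(int)
                person_patches.append(gray[y1:y2, x1:x2])  # cropped person region
        cap.release()
        return person_patches

Downstream steps in the paper (skeleton extraction, joint-angle/velocity/HOG/3D-point/geodesic features, QDA optimization, and the Neuro-Fuzzy Classifier) would operate on these cropped person regions; they are omitted here because the abstract does not specify their concrete implementations.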

Keywords


Cite This Article

APA Style
Abbas, Y., Alarfaj, A.A., Alabdulqader, E.A., Algarni, A., Jalal, A. et al. (2025). Drone-based public surveillance using 3D point clouds and neuro-fuzzy classifier. Computers, Materials & Continua, 82(3), 4759–4776. https://doi.org/10.32604/cmc.2025.059224
Vancouver Style
Abbas Y, Alarfaj AA, Alabdulqader EA, Algarni A, Jalal A, Liu H. Drone-based public surveillance using 3D point clouds and neuro-fuzzy classifier. Comput Mater Contin. 2025;82(3):4759–4776. https://doi.org/10.32604/cmc.2025.059224
IEEE Style
Y. Abbas, A. A. Alarfaj, E. A. Alabdulqader, A. Algarni, A. Jalal, and H. Liu, “Drone-Based Public Surveillance Using 3D Point Clouds and Neuro-Fuzzy Classifier,” Comput. Mater. Contin., vol. 82, no. 3, pp. 4759–4776, 2025. https://doi.org/10.32604/cmc.2025.059224



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.