
Open Access

ARTICLE

DAFPN-YOLO: An Improved UAV-Based Object Detection Algorithm Based on YOLOv8s

Honglin Wang1, Yaolong Zhang2,*, Cheng Zhu3
1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
2 School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, 210044, China
3 Electrical & Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
* Corresponding Author: Yaolong Zhang

Computers, Materials & Continua. https://doi.org/10.32604/cmc.2025.061363

Received 22 November 2024; Accepted 31 January 2025; Published online 27 February 2025

Abstract

UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To fully leverage Drone-AFPN’s multi-scale capabilities, a dedicated 160 × 160 small-object detection head is added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and the SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module are integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO’s advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
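The abstract adopts Shape-IoU as the bounding-box regression loss. As a point of reference, the sketch below follows the general form of the published Shape-IoU loss (IoU term plus a shape-weighted center-distance penalty and a width/height mismatch penalty); it is an illustrative approximation, not the paper's implementation, and the tensor layout, `scale` hyperparameter, and 0.5 weighting are assumptions.

```python
import torch

def shape_iou_loss(pred, target, scale=0.0, eps=1e-7):
    """Illustrative Shape-IoU loss sketch. pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    # Widths, heights, and centers of predicted and ground-truth boxes
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2

    # Plain IoU between the two boxes
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Squared diagonal of the smallest enclosing box (normalizes the center distance)
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Shape weights derived from the ground-truth box proportions
    ww = 2 * torch.pow(w2, scale) / (torch.pow(w2, scale) + torch.pow(h2, scale))
    hh = 2 * torch.pow(h2, scale) / (torch.pow(w2, scale) + torch.pow(h2, scale))

    # Shape-weighted center-distance penalty
    dist = hh * (cx1 - cx2) ** 2 / c2 + ww * (cy1 - cy2) ** 2 / c2

    # Width/height mismatch (shape) penalty
    omega_w = hh * (w1 - w2).abs() / (torch.max(w1, w2) + eps)
    omega_h = ww * (h1 - h2).abs() / (torch.max(h1, h2) + eps)
    shape_cost = torch.pow(1 - torch.exp(-omega_w), 4) + torch.pow(1 - torch.exp(-omega_h), 4)

    # 0.5 weighting of the shape term is an assumed default
    return 1 - iou + dist + 0.5 * shape_cost
```

In this formulation, the ground-truth box's proportions weight both the center-distance and size-mismatch penalties, so regression gradients account for box shape and scale rather than overlap alone, which is the property the abstract attributes to Shape-IoU for more precise shape-based matching.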

Keywords

YOLOv8; UAV-based object detection; AFPN; small-object detection head; SPPELAN; DualConv; loss function