Search Results (44)
  • Open Access

    ARTICLE

    Visual Detection Algorithms for Counter-UAV in Low-Altitude Air Defense

    Minghui Li, Hongbo Li*, Jiaqi Zhu, Xupeng Zhang

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072406 - 12 January 2026

    Abstract To address the challenge of real-time detection of unauthorized drone intrusions in complex low-altitude urban environments such as parks and airports, this paper proposes an enhanced MBS-YOLO (Multi-Branch Small Target Detection YOLO) model for anti-drone object detection, based on the YOLOv8 architecture. To overcome the limitations of existing methods in detecting small objects within complex backgrounds, we design a C2f-Pu module with excellent feature extraction capability and a more compact parameter set, aiming to reduce the model’s computational complexity. To improve multi-scale feature fusion, we construct a Multi-Branch Feature Pyramid Network (MB-FPN) that employs a…

  • Open Access

    ARTICLE

    CCLNet: An End-to-End Lightweight Network for Small-Target Forest Fire Detection in UAV Imagery

    Qian Yu, Gui Zhang*, Ying Wang, Xin Wu, Jiangshu Xiao, Wenbing Kuang, Juan Zhang

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072172 - 12 January 2026

    Abstract Detecting small forest fire targets in unmanned aerial vehicle (UAV) images is difficult, as flames typically cover only a very limited portion of the visual scene. This study proposes the Context-guided Compact Lightweight Network (CCLNet), an end-to-end lightweight model designed to detect small forest fire targets while ensuring efficient inference on devices with constrained computational resources. CCLNet employs a three-stage network architecture built around three key modules. The C3F-Convolutional Gated Linear Unit (C3F-CGLU) performs selective local feature extraction while preserving fine-grained high-frequency flame details. The Context-Guided Feature Fusion Module (CGFM) replaces plain concatenation with triplet-attention interactions to…
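
    The truncated abstract only names the C3F-CGLU module, so the following is a minimal, hypothetical sketch of the generic convolutional gated linear unit (GLU) pattern that the name suggests: project to twice the channels, then let one half gate the other. The class name, channel count, and layout are illustrative assumptions, not the paper's design.

    # Hypothetical sketch of a convolutional gated linear unit (GLU) block;
    # the paper's actual C3F-CGLU layout is not given in this snippet.
    import torch
    import torch.nn as nn

    class ConvGLU(nn.Module):
        """Project to 2x channels, then let one half gate the other."""
        def __init__(self, channels=64):
            super().__init__()
            self.proj = nn.Conv2d(channels, 2 * channels, kernel_size=3, padding=1)

        def forward(self, x):
            value, gate = self.proj(x).chunk(2, dim=1)
            return value * torch.sigmoid(gate)  # selective, gated local response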

  • Open Access

    ARTICLE

    MFF-YOLO: A Target Detection Algorithm for UAV Aerial Photography

    Dike Chen, Zhiyong Qin, Ji Zhang, Hongyuan Wang*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-17, 2026, DOI:10.32604/cmc.2025.072494 - 09 December 2025

    Abstract To address the challenges of small target detection and significant scale variations in unmanned aerial vehicle (UAV) aerial imagery, which often lead to missed and false detections, we propose Multi-scale Feature Fusion YOLO (MFF-YOLO), an enhanced algorithm based on YOLOv8s. Our approach introduces a Multi-scale Feature Fusion Strategy (MFFS), comprising the Multiple Features C2f (MFC) module and the Scale Sequence Feature Fusion (SSFF) module, to improve feature integration across different network levels. This enables more effective capture of fine-grained details and sequential multi-scale features. Furthermore, we incorporate Inner-CIoU, an improved loss function that uses auxiliary…
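
    As a rough illustration of the auxiliary-box idea behind Inner-CIoU-style losses, the sketch below computes IoU on ratio-scaled "inner" boxes; the exact formulation and the ratio used in MFF-YOLO are not given in this snippet, so treat both as assumptions.

    # Illustrative sketch (not the paper's code): compute IoU on ratio-scaled
    # auxiliary boxes, the core idea behind Inner-IoU-style losses.
    import torch

    def inner_iou(pred, target, ratio=0.8, eps=1e-7):
        """pred, target: (N, 4) boxes in (cx, cy, w, h) format."""
        def to_corners(b):
            cx, cy, w, h = b.unbind(-1)
            w, h = w * ratio, h * ratio          # ratio-scaled auxiliary box
            return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

        px1, py1, px2, py2 = to_corners(pred)
        tx1, ty1, tx2, ty2 = to_corners(target)
        inter_w = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
        inter_h = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
        inter = inter_w * inter_h
        union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
        return inter / (union + eps)             # plug into a CIoU-style loss term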

  • Open Access

    ARTICLE

    The Research on Low-Light Autonomous Driving Object Detection Method

    Jianhua Yang*, Zhiwei Lv, Changling Huo

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-18, 2026, DOI:10.32604/cmc.2025.068442 - 10 November 2025

    Abstract To address the limited scale adaptation of autonomous driving object detection algorithms in low-illumination environments and their shortcomings in handling target occlusion, this paper proposes the YOLO-LKSDS autonomous driving detection model. First, the Contrast-Limited Adaptive Histogram Equalization (CLAHE) image enhancement algorithm is improved to increase image contrast and enhance detailed target features; then, on the basis of the YOLOv5 model, the K-means++ clustering algorithm is introduced to obtain suitable anchor boxes, and the SPPELAN spatial pyramid pooling module is improved to enhance the accuracy and robustness of the model for multi-scale target…
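
    Two of the steps mentioned above, CLAHE contrast enhancement and K-means++ anchor clustering, can be sketched as follows; the clip limit, tile size, and anchor count are placeholder values, not the paper's settings, and the paper's CLAHE improvement is not reproduced.

    # Illustrative sketch of two steps the abstract mentions: CLAHE contrast
    # enhancement and k-means++ clustering of box sizes into anchor priors.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def enhance_low_light(bgr_image, clip_limit=2.0, tile=(8, 8)):
        """Apply CLAHE to the L channel in LAB space (placeholder parameters)."""
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
        return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    def cluster_anchors(wh, n_anchors=9):
        """Cluster (width, height) pairs of training boxes with k-means++."""
        km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
        km.fit(np.asarray(wh, dtype=np.float32))
        return km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]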

  • Open Access

    ARTICLE

    A Method for Small Target Detection and Counting of the End of Drill Pipes Based on the Improved YOLO11n

    Miao Li*, Xiaojun Li, Mingyang Zhao

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1917-1936, 2025, DOI:10.32604/cmc.2025.067382 - 29 August 2025

    Abstract To address problems such as large errors and low efficiency in the manual counting of drill pipes during drilling depth measurement, an intelligent detection and counting method for small targets at the ends of drill pipes, based on an improved YOLO11n, is proposed. The method achieves high-precision detection of drill pipe ends in the image by optimizing the target detection model and combines a post-processing correction mechanism to improve drill pipe counting accuracy. To alleviate the low precision of the YOLO11n algorithm for small target recognition in the complex underground…

  • Open Access

    ARTICLE

    An Ochotona Curzoniae Object Detection Model Based on Feature Fusion with SCConv Attention Mechanism

    Haiyan Chen*, Rong Li

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5693-5712, 2025, DOI:10.32604/cmc.2025.065339 - 30 July 2025

    Abstract The detection of Ochotona Curzoniae serves as a fundamental component for estimating the population size of this species and for analyzing the dynamics of its population fluctuations. In natural environments, the pixels representing Ochotona Curzoniae constitute a small fraction of the total pixels, and their distinguishing features are often subtle, complicating the target detection process. To effectively extract the characteristics of these small targets, a feature fusion approach that utilizes up-sampling and channel integration from various layers within a CNN can significantly enhance the representation of target features, ultimately improving detection accuracy. However, the top-down…
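
    The fusion pattern described above (up-sampling a deep feature map and integrating it with a shallower one along the channel axis) can be sketched in PyTorch as follows; channel sizes are illustrative assumptions, and the paper's SCConv attention component is not shown.

    # Minimal sketch of upsample-and-concatenate feature fusion between a deep,
    # low-resolution map and a shallower, high-resolution map.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UpsampleConcatFusion(nn.Module):
        def __init__(self, deep_ch=256, shallow_ch=128, out_ch=128):
            super().__init__()
            self.mix = nn.Conv2d(deep_ch + shallow_ch, out_ch, kernel_size=1)

        def forward(self, deep, shallow):
            deep = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
            return self.mix(torch.cat([deep, shallow], dim=1))

    # e.g. UpsampleConcatFusion()(torch.rand(1, 256, 20, 20), torch.rand(1, 128, 40, 40))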

  • Open Access

    ARTICLE

    Attention Shift-Invariant Cross-Evolutionary Feature Fusion Network for Infrared Small Target Detection

    Siqi Zhang, Shengda Pan*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4655-4676, 2025, DOI:10.32604/cmc.2025.064864 - 30 July 2025

    Abstract Infrared images typically exhibit diverse backgrounds, each potentially containing noise and target-like interference elements. In complex backgrounds, infrared small targets are prone to be submerged by background noise due to their low pixel proportion and limited available features, leading to detection failure. To address this problem, this paper proposes an Attention Shift-Invariant Cross-Evolutionary Feature Fusion Network (ASCFNet) tailored for the detection of infrared weak and small targets. The network architecture first designs a Multidimensional Lightweight Pixel-level Attention Module (MLPA), which alleviates the issue of small-target feature suppression during deep network propagation by combining channel reshaping,…
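
    As a generic illustration of pixel-level attention (the truncated abstract does not spell out MLPA's full design), the sketch below re-weights every spatial location with a learned sigmoid gate; the class name and channel count are assumptions.

    # Generic pixel-level attention gate, for illustration only; it is not the
    # paper's MLPA module.
    import torch
    import torch.nn as nn

    class PixelAttention(nn.Module):
        """Re-weight each spatial location so faint small-target responses
        are less likely to be suppressed in deeper layers."""
        def __init__(self, channels=64):
            super().__init__()
            self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, x):
            return x * self.gate(x)   # broadcast (N, 1, H, W) weights over channels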

  • Open Access

    ARTICLE

    YOLO-LE: A Lightweight and Efficient UAV Aerial Image Target Detection Model

    Zhe Chen*, Yinyang Zhang, Sihao Xing

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1787-1803, 2025, DOI:10.32604/cmc.2025.065238 - 09 June 2025

    Abstract Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n’s 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) We introduce the C2f-Dy and LDown modules to enhance the backbone’s sensitivity to small-object features while reducing backbone parameters, thereby improving…

  • Open Access

    ARTICLE

    TransSSA: Invariant Cue Perceptual Feature Focused Learning for Dynamic Fruit Target Detection

    Jianyin Tang, Zhenglin Yu*, Changshun Shao

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 2829-2850, 2025, DOI:10.32604/cmc.2025.063287 - 16 April 2025

    Abstract In the field of automated fruit harvesting, precise and efficient fruit target recognition and localization play a pivotal role in enhancing the efficiency of harvesting robots. However, this domain faces two core challenges. First, the dynamic nature of the automatic picking process requires fruit target detection algorithms to adapt to multi-view characteristics, ensuring effective recognition of the same fruit from different perspectives. Second, fruits in natural environments often suffer from interference factors such as overlapping, occlusion, and illumination fluctuations, which increase the difficulty of image capture and recognition. To address these challenges, this study conducted…

  • Open Access

    ARTICLE

    Target Detection-Oriented RGCN Inference Enhancement Method

    Lijuan Zhang, Xiaoyu Wang, Songtao Zhang, Yutong Jiang*, Dongming Li, Weichen Sun

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 1219-1237, 2025, DOI:10.32604/cmc.2025.059856 - 26 March 2025

    Abstract In this paper, a reasoning enhancement method based on an RGCN (Relational Graph Convolutional Network) is proposed to improve the detection capability of UAVs (Unmanned Aerial Vehicles) for fast-moving military targets in urban battlefield environments. By combining military images with the publicly available VisDrone2019 dataset, a new dataset called VisMilitary was built, and multiple YOLO (You Only Look Once) models were tested on it. Due to the low-confidence problem caused by fuzzy targets, the performance of traditional YOLO models on real battlefield images decreases significantly. Therefore, we propose an improved RGCN inference model, which improves…
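
    For readers unfamiliar with relational graph convolutions, the sketch below shows the inputs a standard RGCN layer consumes, using PyTorch Geometric's RGCNConv; the dimensions, edges, and relation count are arbitrary, and the paper's improved inference model is not reproduced here.

    # Minimal sketch of a standard (unmodified) relational graph convolution.
    import torch
    from torch_geometric.nn import RGCNConv

    num_nodes, in_dim, out_dim, num_relations = 6, 16, 32, 3
    x = torch.rand(num_nodes, in_dim)                        # node features
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # source/target nodes
    edge_type = torch.tensor([0, 1, 2, 0])                   # relation id per edge

    conv = RGCNConv(in_dim, out_dim, num_relations)
    out = conv(x, edge_index, edge_type)                     # (6, 32) updated node features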

Displaying results 1-10 of 44 (page 1).