Open Access
ARTICLE
Lightweight YOLOM-Net for Automatic Identification and Real-Time Detection of Fatigue Driving
1 School of Electronics and Control Engineering, Chang’an University, Xi’an, 710064, China
2 Digital Business Department, Shaanxi Expressway Engineering Testing Inspection & Testing Co., Ltd., Xi’an, 710086, China
3 School of Energy and Electrical Engineering, Chang’an University, Xi’an, 710064, China
* Corresponding Authors: Yaxue Peng. Email: ; Gang Li. Email:
(This article belongs to the Special Issue: Artificial Intelligence Algorithms and Applications)
Computers, Materials & Continua 2025, 82(3), 4995-5017. https://doi.org/10.32604/cmc.2025.059972
Received 21 October 2024; Accepted 23 December 2024; Issue published 06 March 2025
Abstract
In recent years, China has devoted substantial human and material resources to preventing traffic accidents, particularly those caused by fatigued driving. Current studies mainly concentrate on driver physiological signals, driving behavior, and vehicle information. However, most of these approaches are computationally intensive and ill-suited to real-time detection. Therefore, this paper designs a network that combines precision, speed, and a lightweight structure, and proposes a facial fatigue detection algorithm based on multi-feature fusion. Specifically, the face detection model takes YOLOv8 (You Only Look Once version 8) as the basic framework and replaces its backbone network with MobileNetv3. To focus on the salient regions of the image, CPCA (Channel Prior Convolutional Attention) is adopted to enhance the network’s feature extraction capacity. Meanwhile, the training phase employs the Focal-EIOU (Focal and Efficient Intersection Over Union) loss function, which keeps the network lightweight while increasing target detection accuracy. Finally, the Dlib toolkit was employed to annotate 68 facial feature points. This study established an evaluation metric for facial fatigue and developed a novel fatigue detection algorithm to assess the driver’s condition. A series of comparative experiments was carried out on a self-built dataset. The proposed method achieves mAP (mean Average Precision) values of 96.71% for object detection and 95.75% for fatigue detection, and its detection speed reaches 47 FPS (Frames Per Second). The method balances the trade-off between computational complexity and model accuracy. Furthermore, it can be deployed on the NVIDIA Jetson Orin NX and rapidly detects the driver’s state while maintaining a high degree of accuracy. It contributes to the development of automobile safety systems and reduces the occurrence of traffic accidents.
Keywords
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.