Vol.68, No.1, 2021, pp.689-704, doi:10.32604/cmc.2021.015989
OPEN ACCESS
ARTICLE
Detecting Driver Distraction Using Deep-Learning Approach
Khalid A. AlShalfan1, Mohammed Zakariah2,*
1 College of Computer and Information Sciences, Al-Imam Muhammad Ibn Saud Islamic University, Riyadh, 11564, Saudi Arabia
2 College of Computer and Information Science, King Saud University, Riyadh, 11442, Saudi Arabia
* Corresponding Author: Mohammed Zakariah. Email:
(This article belongs to this Special Issue: Deep Learning Trends in Intelligent Systems)
Received 17 December 2020; Accepted 02 February 2021; Issue published 22 March 2021
Abstract
Distracted driving is currently among the leading causes of traffic accidents, making intelligent vehicle-driving systems increasingly important. Interest has recently grown in driver-assistance systems that detect driver actions and help drivers drive safely. Although such studies use several distinct data types, such as the driver's physical condition, audio and visual features, and vehicle information, the primary data source is images of the driver, including the face, arms, and hands, captured by a camera inside the car. In this study, an architecture based on a convolutional neural network (CNN) is proposed to detect and classify driver distraction. An efficient, high-accuracy CNN is implemented: building on the Visual Geometry Group (VGG-16) architecture for very deep convolutional networks for large-scale image recognition, a new architecture is proposed. The proposed architecture was evaluated on the StateFarm driver-distraction dataset, which is publicly available on Kaggle and frequently used in this line of research, and achieved 96.95% accuracy.
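To make the VGG-16 backbone mentioned in the abstract concrete, the sketch below traces the standard VGG-16 convolutional block plan and the resulting feature-map sizes for a 224 × 224 input. The block counts and channel widths follow the original VGG-16 configuration; the paper's exact modifications (and its classifier head, assumed here to target the 10 StateFarm distraction classes) are not specified in this excerpt, so treat this as an illustrative baseline rather than the authors' architecture.

```python
# Standard VGG-16 convolutional configuration: five blocks of 3x3 convs,
# each block followed by a 2x2 max-pool with stride 2.
VGG16_BLOCKS = [  # (number of 3x3 conv layers, output channels)
    (2, 64), (2, 128), (3, 256), (3, 512), (3, 512),
]

NUM_CLASSES = 10  # assumption: the 10 StateFarm distraction categories (c0-c9)

def feature_map_sizes(input_size=224):
    """Spatial size and channel count after each VGG-16 block.

    3x3 convolutions with padding 1 preserve spatial size, so only the
    2x2 max-pool at the end of each block halves it.
    """
    sizes = []
    size = input_size
    for _n_convs, channels in VGG16_BLOCKS:
        size //= 2  # 2x2 max-pool, stride 2
        sizes.append((size, channels))
    return sizes

print(feature_map_sizes())
# [(112, 64), (56, 128), (28, 256), (14, 512), (7, 512)]
```

The final 7 × 7 × 512 feature map is what a VGG-style classifier head (fully connected layers ending in a `NUM_CLASSES`-way softmax) would consume for the distraction-classification task.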
Keywords
Deep learning; driver-distraction detection; convolutional neural networks; VGG-16
Cite This Article
K. A. AlShalfan and M. Zakariah, "Detecting driver distraction using deep-learning approach," Computers, Materials & Continua, vol. 68, no.1, pp. 689–704, 2021.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.