Open Access

ARTICLE


Leveraging Augmented Reality, Semantic-Segmentation, and VANETs for Enhanced Driver’s Safety Assistance

Sitara Afzal1, Imran Ullah Khan1, Irfan Mehmood2, Jong Weon Lee1,*

1 Mixed Reality and Interaction Lab, Department of Software, Sejong University, Seoul, 05006, Korea
2 Faculty of Engineering and Digital Technologies, School of Computer Science, AI and Electronics, University of Bradford, Bradford, UK

* Corresponding Author: Jong Weon Lee.

(This article belongs to the Special Issue: Deep Learning based Object Detection and Tracking in Videos)

Computers, Materials & Continua 2024, 78(1), 1443-1460. https://doi.org/10.32604/cmc.2023.046707

Abstract

Overtaking is a critical maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility caused by the vehicle in front often makes it difficult for drivers to judge whether an overtaking maneuver is safe, leading to accidents and fatalities. In this paper, we employ atrous convolution, a powerful tool for explicitly adjusting the field-of-view of a filter and for controlling the resolution of the feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution for enhancing overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the road ahead of the leading vehicle, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system that combines a windshield display in the rear car with cameras in both cars. A server in the rear car segments the leading car in the camera feed, and the video stream from the front car is rendered onto the segmented region. Our see-through system extends the driver's field of vision, helping them change lanes, pass a large vehicle that blocks their view, and overtake other vehicles safely. The segmentation network was trained and tested on the Cityscapes dataset. This transparency technique informs the driver of the traffic situation concealed by the front vehicle. Our approach achieves an F1-score of 97.1%. The article also discusses the challenges and opportunities of deploying see-through vehicles in real-world scenarios, including technical, regulatory, and user-acceptance factors.
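The abstract's central building block, atrous (dilated) convolution, enlarges a filter's field-of-view by inserting holes between kernel taps without adding parameters or reducing feature-map resolution. The following is a minimal, illustrative 1D sketch of the idea; it is not the authors' implementation, and the function name and dimensionality are chosen here only for demonstration.

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """Atrous (dilated) convolution: each kernel tap samples the input
    `rate` positions apart, so a k-tap kernel spans (k-1)*rate + 1 samples."""
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective receptive field of the kernel
    out_len = len(signal) - span + 1   # valid (no-padding) output length
    out = np.zeros(out_len)
    for i in range(out_len):
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out

x = np.arange(8, dtype=float)          # toy 1D "feature map": [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])          # 3-tap summing kernel

print(atrous_conv1d(x, w, rate=1))     # [ 3.  6.  9. 12. 15. 18.]  (standard conv, span 3)
print(atrous_conv1d(x, w, rate=2))     # [ 6.  9. 12. 15.]          (dilated, span 5)
```

With rate 2 the same three weights cover five input positions, which is how a segmentation network can see wider context at the same resolution; deep-learning frameworks expose this directly (e.g., a `dilation` argument on 2D convolution layers).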

Keywords


Cite This Article

APA Style
Afzal, S., Khan, I.U., Mehmood, I., Lee, J.W. (2024). Leveraging augmented reality, semantic-segmentation, and vanets for enhanced driver’s safety assistance. Computers, Materials & Continua, 78(1), 1443-1460. https://doi.org/10.32604/cmc.2023.046707
Vancouver Style
Afzal S, Khan IU, Mehmood I, Lee JW. Leveraging augmented reality, semantic-segmentation, and vanets for enhanced driver’s safety assistance. Comput Mater Contin. 2024;78(1):1443-1460. https://doi.org/10.32604/cmc.2023.046707
IEEE Style
S. Afzal, I.U. Khan, I. Mehmood, and J.W. Lee, “Leveraging Augmented Reality, Semantic-Segmentation, and VANETs for Enhanced Driver’s Safety Assistance,” Comput. Mater. Contin., vol. 78, no. 1, pp. 1443-1460, 2024. https://doi.org/10.32604/cmc.2023.046707



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.