Open Access
ARTICLE
Visibility Enhancement of Scene Images Degraded by Foggy Weather Condition: An Application to Video Surveillance
1 Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, 44000, Pakistan
2 Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia
3 College of Applied Computer Science, King Saud University (Almuzahmiyah Campus), Riyadh, 11543, Saudi Arabia
4 Department of Computer Science and Information Systems, College of Business Studies, PAAET, 12062, Kuwait
* Corresponding Author: Abdulrahman M. Qahtani. Email:
(This article belongs to the Special Issue: Recent Advances in Deep Learning, Information Fusion, and Features Selection for Video Surveillance Application)
Computers, Materials & Continua 2021, 68(3), 3465-3481. https://doi.org/10.32604/cmc.2021.017454
Received 30 January 2021; Accepted 08 March 2021; Issue published 06 May 2021
Abstract
In recent years, video surveillance applications have played a significant role in our daily lives. Images captured under foggy and hazy weather conditions lose fidelity, which reduces their visibility and limits their usefulness for video surveillance. Enhancing the visibility of foggy and hazy images benefits numerous computer and machine vision applications such as satellite imagery, object detection, target killing, and surveillance. A number of visibility enhancement algorithms and methods have been proposed to remove fog; however, these techniques suffer from several limitations that pose strong obstacles to real-world outdoor computer vision applications. In particular, existing techniques do not perform well when images contain heavy fog, large white regions, or strong atmospheric light. This research work proposes a new framework that defogs and dehazes images in order to enhance the visibility of foggy and hazy scenes. The proposed framework is based on a conditional generative adversarial network (CGAN) with two networks, a generator and a discriminator, each with distinct properties: the generator produces fog-free images from foggy images, and the discriminator distinguishes the restored image from the original fog-free image. Experiments are conducted on the FRIDA dataset and on hazy images. Performance on the fog dataset is assessed with PSNR and SSIM, and on the haze dataset with the e, r̄, and σ metrics. Experimental results show that the proposed method achieves higher PSNR and SSIM values (18.23 and 0.823, respectively) than the compared method (13.94 and 0.791). These results demonstrate that the proposed framework removes fog and enhances the visibility of foggy and hazy images.
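The paper does not include code; the sketch below only illustrates, in PyTorch, the adversarial setup the abstract describes: a generator that maps foggy images to fog-free estimates and a discriminator that separates restored images from real fog-free ones. The layer counts, channel widths, the L1 fidelity term, and the loss weight are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the generator/discriminator setup described in the abstract.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a foggy RGB image to an estimated fog-free RGB image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, foggy):
        return self.net(foggy)

class Discriminator(nn.Module):
    """Scores (foggy, candidate) pairs: real fog-free vs. restored images."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1),  # patch-wise real/fake map
        )

    def forward(self, foggy, candidate):
        # Conditioning: the discriminator sees the foggy input alongside the candidate.
        return self.net(torch.cat([foggy, candidate], dim=1))

def train_step(G, D, opt_G, opt_D, foggy, clear, lambda_l1=100.0):
    """One adversarial training step (BCE-with-logits GAN loss plus an L1 term)."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real (foggy, clear) pairs -> 1, (foggy, restored) pairs -> 0.
    fake = G(foggy).detach()
    d_real, d_fake = D(foggy, clear), D(foggy, fake)
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: fool the discriminator and stay close to the ground-truth image.
    fake = G(foggy)
    d_fake = D(foggy, fake)
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, clear)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```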
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.