Open Access
ARTICLE
Hybrid Deep VGG-NET Convolutional Classifier for Video Smoke Detection
Department of Computer Science & Engineering, Institute of Technology, Guru Ghasidas University, Bilaspur, Chhattisgarh, 495009, India.
* Corresponding Author: Princy Matlani. Email: .
Computer Modeling in Engineering & Sciences 2019, 119(3), 427-458. https://doi.org/10.32604/cmes.2019.04985
Abstract
Real-time wildfire smoke detection with machine-based identification methods does not yield adequate accuracy and is therefore unsuitable for reliable prediction. Moreover, many video smoke detection approaches require minimum lighting conditions for the cameras to identify the presence of smoke particles in a scene. To overcome these challenges, the proposed work introduces a deep VGG-Net Convolutional Neural Network (CNN) for the classification of smoke particles. A Deep Feature Synthesis algorithm automatically generates features from relational datasets, and a hybrid Artificial Bee Colony (ABC) optimization addresses slow convergence by reducing complexity. The proposed real-time algorithm first applies pre-processing for image enhancement; after enhancement, foreground and background regions are separated with Otsu thresholding. An alpha channel is then applied to the image components to regulate the linear combination of the foreground and background components. A Farneback optical flow estimation step reduces the false detection rate, and finally smoke particles are classified with the VGG-Net CNN classifier. Experimental results show improved statistical stability and classification accuracy, and the algorithm achieves better smoke detection performance across various video scenes.
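To make the pre-classification pipeline concrete, the following is a minimal sketch of the stages described above (enhancement, Otsu foreground/background separation, alpha-regulated blending, and Farneback motion filtering), assuming OpenCV and a pair of consecutive grayscale frames. The function name, the motion threshold, and the blending weight are illustrative assumptions, not values taken from the paper; the resulting candidate regions would then be passed to the VGG-Net CNN classifier.

```python
import cv2
import numpy as np

def smoke_candidate_mask(prev_gray, curr_gray, alpha=0.6):
    """Return a smoke-candidate image for the current frame (illustrative sketch)."""
    # Pre-processing: simple histogram equalization as a stand-in for
    # the paper's image enhancement step.
    enhanced = cv2.equalizeHist(curr_gray)

    # Otsu thresholding separates foreground and background regions.
    _, fg_mask = cv2.threshold(enhanced, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bg_mask = cv2.bitwise_not(fg_mask)

    # Alpha value regulates the linear combination of foreground and
    # background components.
    blended = cv2.addWeighted(
        cv2.bitwise_and(enhanced, fg_mask), alpha,
        cv2.bitwise_and(enhanced, bg_mask), 1.0 - alpha, 0)

    # Farneback dense optical flow as a motion cue: keeping only moving
    # pixels suppresses static false detections.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    motion_mask = (magnitude > 0.5).astype(np.uint8) * 255  # assumed threshold

    # Candidate smoke regions to be classified by the VGG-Net CNN.
    return cv2.bitwise_and(blended, motion_mask)
```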
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.