Open Access
ARTICLE
Detection and Classification of Diabetic Retinopathy Using DCNN and BSN Models
Department of ECE, SRC, SASTRA Deemed University, Kumbakonam, India
* Corresponding Author: S. Sudha. Email:
Computers, Materials & Continua 2022, 72(1), 597-609. https://doi.org/10.32604/cmc.2022.024065
Received 02 October 2021; Accepted 21 December 2021; Issue published 24 February 2022
Abstract
Diabetes is associated with many complications that could lead to death. Diabetic retinopathy (DR), a complication of diabetes, is difficult to diagnose and may lead to vision loss. Visual identification of micro features in fundus images for the diagnosis of DR is a complex and challenging task for clinicians. Because clinical testing involves complex procedures and is time-consuming, an automated system would help ophthalmologists to detect DR and administer treatment in a timely manner so that blindness can be avoided. Previous research works have focused on image processing algorithms, neural networks, or signal processing techniques alone to detect diabetic retinopathy. Therefore, we aimed to develop a novel integrated approach, utilizing both convolutional neural networks and signal processing techniques, to increase the accuracy of detection. In the proposed method, a biological electroretinogram (ERG) sensor network (BSN) and a deep convolutional neural network (DCNN) were developed to detect and classify DR. In the BSN system, electrodes were used to record the ERG signal, which was pre-processed to remove noise. Processing was performed in the frequency domain by applying the fast Fourier transform (FFT), and mel frequency cepstral coefficients (MFCCs) were extracted. An artificial neural network (ANN) classifier was used to distinguish the signals of eyes with DR from those of normal eyes. Additionally, fundus images were captured using a fundus camera and used as the input for DCNN-based analysis. The DCNN consisted of many layers to facilitate the extraction of features and the classification of fundus images into normal images, non-proliferative DR (NPDR) or early-stage DR images, and proliferative DR (PDR) or advanced-stage DR images. Furthermore, it classified NPDR according to microaneurysms, hemorrhages, cotton wool spots, and exudates; the presence of new blood vessels indicated PDR.
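The abstract's ERG path (FFT of the pre-processed signal, then MFCC extraction) follows the standard cepstral pipeline. As a minimal NumPy sketch of that pipeline, not the authors' implementation: the frame length, filter count, and number of coefficients below are assumed for illustration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(signal, fs, n_fft=512, n_filters=20, n_coeffs=12):
    """MFCCs of one signal frame (parameters are illustrative assumptions)."""
    # Window the frame and move to the frequency domain via FFT
    frame = np.zeros(n_fft)
    n = min(len(signal), n_fft)
    frame[:n] = signal[:n] * np.hamming(n)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft  # power spectrum, n_fft//2 + 1 bins

    # Triangular mel filterbank spanning 0 .. fs/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for j in range(left, center):
            fbank[i - 1, j] = (j - left) / (center - left)
        for j in range(center, right):
            fbank[i - 1, j] = (right - j) / (right - center)

    # Log filterbank energies, then DCT-II to obtain cepstral coefficients
    log_e = np.log(fbank @ power + 1e-10)
    k = np.arange(n_coeffs)[:, None]
    m = np.arange(n_filters)[None, :]
    return (np.cos(np.pi * k * (2 * m + 1) / (2 * n_filters)) * log_e).sum(axis=1)
```

The resulting coefficient vector is the kind of compact feature representation that an ANN classifier can take as input.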
The accuracy, sensitivity, and specificity of the ANN classifier were found to be 94%, 95%, and 93%, respectively. Both the accuracy rate and the sensitivity rate of the DCNN classifier were 96.5% for images acquired from various hospitals as well as databases. A comparison between the accuracy rates of the BSN and DCNN approaches showed that the DCNN with fundus images decreased the error rate to 4%.
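The three reported figures follow from the standard confusion-matrix definitions. A short sketch of how they are computed for a binary DR-vs-normal labeling (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def classifier_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from binary labels (1 = DR, 0 = normal)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)   # true-positive rate: DR cases correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: normal eyes correctly cleared
    return accuracy, sensitivity, specificity
```

Under these definitions, an accuracy of 96.5% corresponds to an overall error rate of 100% minus the accuracy.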
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.