The use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automating the diagnosis of retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification, but were restricted either to hand-crafted feature selection or to binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, data has been collected and combined from three distinct sources. The images are preprocessed to enhance their details. Six layers of a convolutional neural network (CNN) are used for automated feature extraction and classification of 20 retinal diseases. It is observed that the results depend on the number of classes. For binary classification (healthy
The retina is a thin layer of tissue located at the back wall of a human eye and contributes to vision formation. It comprises millions of light-sensitive nerve cells, optic nerve and macula. Any retinal disorder can interrupt its function, leading to gradual vision loss or blindness. The disorders can occur in the macula, optic nerve, or retinal vessels. With the increase in life expectancy of the population, the number of patients suffering from chorioretinal diseases has also increased [
With advancements in technology, computer systems can be used to automate the diagnosis of eye diseases. This can replace the manual analysis of ophthalmic images by ophthalmologists, so the time and effort taken in manual diagnosis can be saved and devoted to treatment. With the rise of machine learning techniques, computers can learn from their environment and take appropriate actions according to circumstances, much as a human brain would. Several researchers have suggested machine learning methods to classify eye diseases, proposing the use of artificial neural networks (ANN), support vector machines (SVM), Naive Bayes, K-nearest neighbors (KNN), and many more, to classify different retinal diseases [
Although machine learning techniques have outperformed several manual and traditional approaches, they still have limitations. The major issue with these methods is their reliance on hand-crafted features: the selected features may vary and affect the results each time. Moreover, feature engineering is expensive and becomes a bottleneck for traditional machine learning methods. Deep learning algorithms overcome these limitations. Deep learning is a subset of machine learning, known as the "deep neural network" approach because of the stacking of multiple neural network layers [
Different researchers have proposed deep learning approaches for feature extraction and selection. They worked on Deep Belief Networks (DBN), CNNs and pre-trained transfer learning models, such as Inception V3 and AlexNet, for the classification of retinal diseases, and contributed to the automation of Age-related Macular Degeneration (AMD) diagnosis [
This paper presents a deep learning approach to classify fundus retinal images for multi-category data after performing a few preprocessing steps on the images. The deep learning approach proposed here does not require the extraction and selection of hand-crafted features. The paper is organized as follows: details of the dataset used by this study, the preprocessing steps and the structure of the proposed deep learning model are presented in the Materials and Methods section. Outcomes are organized in the Results section and the findings of this research compared with the literature are presented in the Discussion section. The paper ends with a conclusion.
Insights of the dataset, data preprocessing operations and the CNN model proposed in this study are presented in this section.
Publicly accessible retinal image datasets are small and insufficient for multiple retinal disease classification: they contain either merely a couple of retinal diseases or just the stages of a single disease. Deep learning models demand a large amount of data for training. This research therefore consolidated retinal images from three distinct resources, including the STARE (Structured Analysis of the Retina) project (
Retinal diseases covered in this research are AMD, Blur Fundus, Coat’s disease, Mild DR, Moderate DR, Proliferative DR (PDR), Severe DR, Drusen, Glaucoma, Hypertensive Retinopathy (HR), Maculopathy, Pathologic Myopia (PM), Retinal Artery Occlusion (RAO), Retinitis Pigmentosa (RP), Rhegmatogenous Retinal Detachment (RRD), Retinal Vein Occlusion (RVO)–(i) Branch Retinal Vein Occlusion (BRVO) and (ii) Central Retinal Vein Occlusion (CRVO), Silicone Oil in eye and Yellow White spots. The proposed CNN model is capable of distinguishing among the diseases as well as their stages. A normal control class is also included for distinction in addition to 19 case classes. Retinal diseases included in this research and the number of instances against each disease/stage are presented in
Class no. | Disease | Images per class |
---|---|---|
1 | AMD | 41 |
2 | Blur fundus | 45 |
3 | Coat’s disease | 34 |
4 | Mild DR | 57 |
5 | Moderate DR | 84 |
6 | Proliferative DR | 72 |
7 | Severe DR | 52 |
8 | Drusen | 30 |
9 | Glaucoma | 28 |
10 | HR | 22 |
11 | Maculopathy | 74 |
12 | PM | 54 |
13 | Normal | 123 |
14 | RAO | 32 |
15 | RP | 38 |
16 | RRD | 57 |
17 | RVO-BRVO | 54 |
18 | RVO-CRVO | 54 |
19 | Silicone oil in eye | 19 |
20 | Yellow white spots | 30 |
Fundus images in the dataset vary in size, as they have been collected from different sources. The first preprocessing step is resizing and scaling of the images. Higher-resolution images produce larger feature maps, and hence more parameters, while passing through the CNN layers, requiring more memory. Moreover, the learning model must handle all those parameters, resulting in higher computational cost. In general, a large number of parameters combined with a small number of training instances may lead to overfitting. ImageNet pre-trained models use images with a resolution of 224 × 224; therefore, all images in the dataset are resized to 224 × 224 to save memory and computational cost.
The second task is selecting the green channel from the RGB fundus images. Processing three-channel images demands more computation. A single channel can be obtained either by splitting the channels or by grayscale conversion. In fundus images, the vascular plane of the retina, the macula, the optic nerve and the blood vessels must be visible for the analysis and diagnosis of disease. The green channel of an image provides the most detail useful in the diagnosis process. The red, green and blue channels as well as grayscale images are presented in
Most retinal diseases affect blood vessels, which are extremely thin. A sharpening operation enhances these vessels by convolving the image
It is observed that the macula is always dull and the optic disc bright in fundus images. Contrast adjustment rectifies the illumination effect on the vessels, macula and optic disc. Contrast Limited Adaptive Histogram Equalization (CLAHE) has been applied to enhance low-contrast regions, with a clip limit to avoid amplifying noise.
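The green-channel selection and sharpening steps described above can be sketched in NumPy as follows. This is an illustrative, minimal implementation, not the authors' published code; the 3 × 3 sharpening kernel is a common choice and an assumption here, and CLAHE (omitted) would typically be applied via a library such as OpenCV.

```python
import numpy as np

# Common 3x3 sharpening kernel (an assumption; the paper does not list its kernel).
SHARPEN_KERNEL = np.array([[ 0., -1.,  0.],
                           [-1.,  5., -1.],
                           [ 0., -1.,  0.]])

def green_channel(rgb):
    """Select the green channel (index 1) of an H x W x 3 RGB image."""
    return rgb[:, :, 1].astype(np.float64)

def sharpen(img, kernel=SHARPEN_KERNEL):
    """Naive 'valid' 2-D convolution of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

Because the kernel sums to 1, uniform regions are left unchanged while intensity edges (such as thin vessels) are amplified.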
A deep learning model to classify multi-retinal diseases, comprising various CNN layers, is presented in this paper. The flow of preprocessing steps and the deep learning model are depicted in
The presented deep learning model is constituted using different CNN layers. CNN is a well-known deep learning model, especially for the classification of images. Various researchers have presented its variations by changing the number and arrangement of CNN layers. However, its fundamental layers i.e., convolutional layers, activation functions, pooling layers and fully connected layers remain almost the same [
In each convolutional layer, input images are convolved with 32 filters of size 3 × 3 to extract features from them as expressed in
ReLU activation function has been applied after each convolutional layer to activate the function depending on the values of
The task of the pooling layer is to reduce the resolution of the feature maps. The max pooling algorithm has been applied, since it retains the maximum value within each pooling region, with a stride of 2 for downsampling. A dropout layer with a dropout rate of 0.5 is added to the model to avoid overfitting [
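As a minimal illustration (not the authors' implementation), 2 × 2 max pooling with stride 2 can be written in NumPy as:

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max pooling with stride 2: keep the maximum of each 2x2 block,
    halving both spatial dimensions (odd trailing rows/columns are dropped)."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    return img[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))
```

For example, a 4 × 4 feature map becomes 2 × 2, each output value being the maximum of one 2 × 2 region.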
Layers of proposed model | Output shape |
---|---|
Input | 224 × 224 × 1 |
3 × 3 Convolution layer + ReLU | 222 × 222 × 32 |
3 × 3 Convolution layer + ReLU | 220 × 220 × 32 |
2 × 2 max-pooling | 110 × 110 × 32 |
Dropout (0.5) | 110 × 110 × 32 |
Flattened | 387200 |
Fully connected | 128 |
ReLU + Dropout (0.5) | 128 |
Softmax | 20 |
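The output shapes in the table above follow from the standard arithmetic for unpadded ("valid") convolutions and stride-2 pooling; a small sketch tracing them (shape formulas only, no actual model):

```python
def conv_valid(size, kernel=3):
    """Spatial size after a 'valid' (no-padding, stride-1) convolution."""
    return size - kernel + 1

def pool(size, window=2):
    """Spatial size after max pooling with stride equal to the window size."""
    return size // window

s = 224
s = conv_valid(s)        # first 3x3 convolution: 224 -> 222
s = conv_valid(s)        # second 3x3 convolution: 222 -> 220
s = pool(s)              # 2x2 max pooling: 220 -> 110
flattened = s * s * 32   # 32 feature maps of 110 x 110 -> 387200
```

This reproduces the flattened size of 387,200 listed in the table.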
The proposed model has been implemented with 10-fold cross-validation on 1000 images belonging to 20 categories. Once a model is structured and implemented, its correctness and reliability can be assessed using various evaluation metrics. Accuracy is one such metric, reflecting the proportion of correctly classified instances. However, the retinal fundus imaging dataset used in this research has imbalanced classes; therefore, evaluating the model on accuracy alone is not enough. Two further metrics, sensitivity and specificity, are also considered. Sensitivity measures the strength of the model in correctly classifying truly diseased subjects, whereas specificity measures its competence in correctly identifying subjects that do not belong to a particular class.
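For a given class, these metrics follow directly from the per-class confusion counts (true/false positives and negatives). A minimal sketch, with illustrative function names:

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of diseased subjects correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of non-class subjects correctly identified."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Fraction of all subjects classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For example, with 8 true positives, 2 false negatives, 9 true negatives and 1 false positive, sensitivity is 0.8, specificity 0.9 and accuracy 0.85.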
This study has three assessment dimensions. Initially, all images are classified into binary categories after preprocessing. Unfortunately, the model has not learned features appropriately for normal
AMD | Blur Fundus | Coat | Mild DR | Moderate DR | PDR | Severe DR | Drusen | Glaucoma | HR | Maculopathy | PM | Normal | RAO | RP | RRD | BRVO | CRVO | Silicone Oil | YWS | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AMD | 100 | 96 | 100 | 100 | 97 | 100 | 57 | 100 | 65 | 65 | 97 | 100 | 82 | 100 | 100 | 58 | 41 | 68 | 56 | |
Blur fundus | 100 | 88 | 100 | 79 | 85 | 97 | 60 | 88 | 68 | 62 | 90 | 100 | 60 | 77 | 74 | 94 | 82 | 71 | 60 | |
Coat | 96 | 88 | 100 | 78 | 69 | 90 | 90 | 100 | 94 | 69 | 90 | 100 | 52 | 48 | 62 | 60 | 61 | 65 | 90 | |
Mild DR | 100 | 100 | 100 | 98 | 100 | 86 | 100 | 96 | 73 | 100 | 51 | 92 | 100 | 61 | 100 | 94 | 48 | 76 | 66 | |
Moderate DR | 100 | 79 | 78 | 98 | 96 | 82 | 73 | 76 | 80 | 54 | 61 | 98 | 73 | 70 | 81 | 58 | 59 | 56 | 95 | |
PDR | 97 | 85 | 69 | 100 | 96 | 85 | 70 | 73 | 77 | 100 | 90 | 97 | 70 | 60 | 83 | 81 | 57 | 80 | 71 | |
Severe DR | 100 | 97 | 90 | 86 | 82 | 85 | 100 | 65 | 100 | 41 | 94 | 51 | 96 | 100 | 53 | 48 | 100 | 74 | 63 | |
Drusen | 57 | 60 | 90 | 100 | 73 | 70 | 100 | 100 | 76 | 100 | 64 | 100 | 50 | 54 | 66 | 64 | 64 | 62 | 95 | |
Glaucoma | 100 | 88 | 100 | 96 | 76 | 73 | 65 | 100 | 100 | 73 | 96 | 67 | 100 | 100 | 68 | 67 | 67 | 80 | 52 | |
HR | 65 | 68 | 94 | 73 | 80 | 77 | 100 | 76 | 100 | 77 | 72 | 72 | 58 | 63 | 73 | 96 | 72 | 100 | 100 | |
Maculopathy | 65 | 62 | 69 | 100 | 54 | 100 | 41 | 100 | 73 | 77 | 57 | 93 | 70 | 94 | 56 | 57 | 57 | 80 | 70 | |
PM | 97 | 90 | 90 | 51 | 61 | 90 | 94 | 64 | 96 | 72 | 57 | 50 | 64 | 96 | 95 | 97 | 100 | 75 | 64 | |
Normal | 100 | 100 | 100 | 92 | 98 | 97 | 51 | 100 | 67 | 72 | 93 | 50 | 64 | 100 | 94 | 92 | 50 | 92 | 96 | |
RAO | 82 | 60 | 52 | 100 | 73 | 70 | 96 | 50 | 100 | 58 | 70 | 64 | 64 | 77 | 89 | 64 | 75 | 62 | 100 | |
RP | 100 | 77 | 48 | 61 | 70 | 60 | 100 | 54 | 100 | 63 | 94 | 96 | 100 | 77 | 100 | 90 | 60 | 67 | 95 | |
RRD | 100 | 74 | 62 | 100 | 81 | 83 | 53 | 66 | 68 | 73 | 56 | 95 | 94 | 89 | 100 | 51 | 100 | 76 | 66 | |
BRVO | 58 | 94 | 60 | 94 | 58 | 81 | 48 | 64 | 67 | 96 | 57 | 97 | 92 | 64 | 90 | 51 | 86 | 75 | 86 | |
CRVO | 41 | 82 | 61 | 48 | 59 | 57 | 100 | 64 | 67 | 72 | 57 | 100 | 50 | 75 | 60 | 100 | 86 | 75 | 64 | |
Silicone oil | 68 | 71 | 65 | 76 | 56 | 80 | 74 | 62 | 80 | 100 | 80 | 75 | 92 | 62 | 67 | 76 | 75 | 75 | 100 | |
YWS | 56 | 60 | 90 | 66 | 95 | 71 | 63 | 95 | 52 | 100 | 70 | 64 | 96 | 100 | 95 | 66 | 86 | 64 | 100 |
AMD | Blur Fundus | Coat | Mild DR | Moderate DR | PDR | Severe DR | Drusen | Glaucoma | HR | Maculopathy | PM | Normal | RAO | RP | RRD | BRVO | CRVO | Silicone Oil | YWS | 
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AMD | 1 | 0.95 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | |
Blur fundus | 1 | 0.86 | 1 | 0.78 | 0.6 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0.93 | 0.40 | 1 | 0.86 | 1 | 1 | |
Coat | 0.95 | 0.86 | 1 | 0.5 | 0 | 0.73 | 0.90 | 1 | 0.91 | 0 | 0.72 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0.81 | |
Mild DR | 1 | 1 | 1 | 0.95 | 1 | 0.95 | 1 | 0.95 | 1 | 1 | 1 | 0.88 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | |
Moderate DR | 1 | 0.78 | 0.5 | 0.95 | 0.95 | 0.93 | 1 | 1 | 1 | 1 | 1 | 0.94 | 1 | 1 | 0.74 | 0 | 0 | 1 | 0.95 | |
PDR | 1 | 0.6 | 0 | 1 | 0.95 | 0.79 | 1 | 1 | 1 | 1 | 0.83 | 0.94 | 1 | 1 | 0.71 | 1 | 1 | 1 | 1 | |
Severe DR | 1 | 1 | 0.73 | 0.95 | 0.93 | 0.79 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | |
Drusen | 1 | 1 | 0.90 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | |
Glaucoma | 1 | 1 | 1 | 0.95 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0.67 | 0 | |
HR | 1 | 1 | 0.91 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | |
Maculopathy | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
PM | 1 | 1 | 0.72 | 1 | 1 | 0.83 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0.89 | 1 | 1 | 1 | 1 | |
Normal | 1 | 1 | 1 | 0.88 | 0.94 | 0.94 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0.88 | 0.83 | 0 | 1 | 1 | |
RAO | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0.30 | 1 | 1 | |
RP | 1 | 0.93 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0.83 | 0 | 1 | 0.92 | |
RRD | 1 | 0.40 | 0 | 1 | 0.74 | 0.71 | 0 | 0 | 0 | 0 | 1 | 0.89 | 0.88 | 1 | 1 | 1 | 1 | 1 | 1 | |
BRVO | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0.83 | 0 | 0.83 | 1 | 0.72 | 1 | 0.89 | |
CRVO | 1 | 0.86 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0.30 | 0 | 1 | 0.72 | 1 | 1 | |
Silicone oil | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0.67 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
YWS | 1 | 1 | 0.81 | 1 | 0.95 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0.92 | 1 | 0.89 | 1 | 1 |
AMD | Blur Fundus | Coat | Mild DR | Moderate DR | PDR | Severe DR | Drusen | Glaucoma | HR | Maculopathy | PM | Normal | RAO | RP | RRD | BRVO | CRVO | Silicone Oil | YWS | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AMD | 1 | 0.95 | 1 | 1 | 0.96 | 1 | 0 | 1 | 0 | 1 | 0.94 | 1 | 0.6 | 1 | 1 | 1 | 0 | 0 | 0 | |
Blur fundus | 1 | 0.86 | 1 | 0.78 | 1 | 0.94 | 0 | 0.67 | 0 | 1 | 0.83 | 1 | 0 | 0.58 | 1 | 0.88 | 0.77 | 0 | 0 | |
Coat | 0.95 | 0.86 | 1 | 0.5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | |
Mild DR | 1 | 1 | 1 | 1 | 1 | 0.76 | 1 | 1 | 0 | 1 | 0 | 0.95 | 1 | 0 | 1 | 0.88 | 1 | 0 | 0 | |
Moderate DR | 1 | 0.78 | 0.5 | 1 | 0.97 | 0.65 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0.86 | 1 | 1 | 0 | 0.95 | |
PDR | 0.96 | 1 | 1 | 1 | 0.97 | 0.94 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0.56 | 0 | 0 | 0 | |
Severe DR | 1 | 0.94 | 1 | 0.76 | 0.65 | 0.94 | 1 | 0 | 1 | 0 | 0.83 | 0 | 0.9 | 1 | 1 | 0 | 1 | 1 | 1 | |
Drusen | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0.43 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.90 | |
Glaucoma | 1 | 0.67 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0.94 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
HR | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.43 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.94 | 1 | 1 | 1 | |
Maculopathy | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0.83 | 0 | 0.83 | 0 | 0 | 0 | 0 | 0 | |
PM | 0.94 | 0.83 | 1 | 0 | 0 | 1 | 0.83 | 1 | 0.94 | 1 | 0 | 0 | 0 | 0.92 | 1 | 0.94 | 1 | 0 | 0 | |
Normal | 1 | 1 | 1 | 0.95 | 1 | 1 | 0 | 1 | 1 | 1 | 0.83 | 0 | 0 | 1 | 1 | 1 | 1 | 0.83 | 0.90 | |
RAO | 0.6 | 0 | 0 | 1 | 0 | 0 | 0.9 | 1 | 1 | 1 | 0 | 0 | 0 | 0.58 | 0.84 | 1 | 1 | 0 | 1 | |
RP | 1 | 0.58 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0.83 | 0.92 | 1 | 0.58 | 1 | 0.94 | 1 | 0 | 1 | |
RRD | 1 | 1 | 1 | 1 | 0.86 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0.84 | 1 | 0 | 1 | 0 | 0 | |
BRVO | 1 | 0.88 | 1 | 0.88 | 1 | 0.56 | 0 | 1 | 1 | 0.94 | 0 | 0.94 | 1 | 1 | 0.94 | 0 | 1 | 0 | 0.80 | |
CRVO | 0 | 0.77 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | |
Silicone oil | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0.83 | 0 | 0 | 0 | 0 | 0 | 1 | |
YWS | 0 | 0 | 1 | 0 | 0.95 | 0 | 1 | 0.90 | 1 | 1 | 0 | 0 | 0.90 | 1 | 1 | 0 | 0.80 | 0 | 1 |
Disease | Instances | Accuracy (%) | Sensitivity | Specificity |
---|---|---|---|---|
AMD | 41 | 100 | 1.0 | 1.0 |
Blur fundus | 45 | 100 | 1.0 | 1.0 |
Coat’s disease | 34 | 100 | 1.0 | 1.0 |
Mild DR | 57 | 91.8 | 0.88 | 0.95 |
Moderate DR | 84 | 97.8 | 0.94 | 1.0 |
PDR | 72 | 97.6 | 0.94 | 1.0 |
Severe DR | 52 | 51 | 1.0 | 0.0 |
Drusen | 30 | 100 | 1.0 | 1.0 |
Glaucoma | 28 | 67 | 0.0 | 1.0 |
HR | 22 | 72 | 0.0 | 1.0 |
Maculopathy | 74 | 92.8 | 1 | 0.83 |
PM | 54 | 50 | 1.0 | 0.0 |
RAO | 32 | 64 | 1.0 | 0.0 |
RP | 38 | 100 | 1.0 | 1.0 |
RRD | 57 | 94.5 | 0.88 | 1.0 |
RVO-BRVO | 54 | 91.7 | 0.83 | 1.0 |
RVO-CRVO | 54 | 50 | 0.0 | 1.0 |
Silicone oil | 19 | 91.6 | 1.0 | 0.83 |
Yellow white spots | 30 | 96.4 | 1.0 | 0.90 |
The second dimension of the study concerns multi-category classification and the analysis of its results. The total number of classes considered here is 16 (15 disease classes and one normal control class). The diseases comprise AMD, blur fundus, Coat’s disease, DR, Drusen, glaucoma, HR, maculopathy, PM, RAO, RP, RRD, RVO, silicone oil in eye and yellow-white spots. Note that sub-classes/stages of the various diseases have been collectively taken as single classes. The proposed model has been tested with original as well as preprocessed images. Classification without preprocessing turned out to be unsatisfactory, and improvement in classification results can be witnessed when the original images are enhanced. The achieved accuracy is 93.3% with preprocessing and 43.5% without; the sensitivity and specificity with preprocessing are 0.92 and 0.93, respectively. The results are presented in
Preprocessing Steps | Accuracy (%) | Sensitivity | Specificity |
---|---|---|---|
Without preprocessing (3 channels) | 43.5 | 0.27 | 0.43 |
Green channel | 62.3 | 0.62 | 0.62 |
Green channel + Sharpness | 76 | 0.78 | 0.78 |
Green channel + Sharpness + Contrast adjustment | 81.4 | 0.85 | 0.81 |
The third dimension of this study is the classification of fundus images with all stages of the various diseases treated as separate classes. Altogether, 20 classes are considered (19 disease classes, with stages as separate classes, and one normal control class). The diseases with sub-stages include DR (mild, moderate, severe and proliferative) and RVO (BRVO and CRVO). The results dropped slightly with the addition of disease sub-stages compared to the previous 16-class results. The results, including accuracy, sensitivity and specificity, are presented in
Preprocessing steps | Accuracy (%) | Sensitivity | Specificity |
---|---|---|---|
Without preprocessing (3 channels) | 39.2 | 0.39 | 0.38 |
Green channel | 64.4 | 0.64 | 0.64 |
Green channel + Sharpness | 72.3 | 0.72 | 0.73 |
Green channel + Sharpness + Contrast adjustment | 85.1 | 0.88 | 0.85 |
It is observed that most past works are lacking in at least one of three respects: (i) the dataset size is small, (ii) the research addresses binary classification only, and/or (iii) an engineered set of features is used. This study addresses these gaps by bringing a comparatively large dataset into play and introducing an automated approach for the identification of 19 distinct retinal diseases using a CNN, which performs end-to-end classification and drops the step of hand-crafted feature extraction and selection. The preprocessing operations applied in this study comprise green channel selection, image sharpening, and contrast adjustment with CLAHE, to enhance the image features required by the model. It has been observed that the proposed preprocessing steps improve the outcomes of the model.
A lot of research has been done on the classification of various retinal diseases using traditional machine learning algorithms [
Binary classification of diabetic retinopathy (healthy
Tackling many classes in a classification problem is challenging, and some key points must be considered when classifying data consisting of a large number of categories [
There is a great deal of valuable work by a significant number of researchers on the classification of retinal diseases using deep learning with different retinal imaging modalities. Some researchers constructed their own deep learning models, while others utilized pre-trained models such as VGGNet, AlexNet and Inception V3. Karri presented classification using the pre-trained GoogLeNet model with three classes, i.e., diabetic macular edema, dry AMD and normal controls, using the OCT imaging modality, with an accuracy of 94% [
Most researchers have worked on binary classification, with less focus on multi-category classification of retinal diseases. A few studies cover multiple retinal diseases or the classification of multiple stages of a single disease, but the results are not very satisfactory. J. Y. Choi presented a pilot study employing a small database to classify retinal images consisting of ten classes [
The prediction of fundus AMD with 13 classes (nine AREDS stages, three late AMD stages and one for unlabeled images) is conducted by extracting features using CNN and performing classification using an ensemble of random forest [
Another study is conducted for the identification of five retinal diseases using a dataset of 157 instances of STARE database. The dataset is preprocessed with an upgraded CLAHE filter and data augmentation is applied. The study achieved 100% results [
Despite the high accuracies reported in
Studies | Dataset | Number of classes | Number of instances | Preprocessing | Feature extraction | Classification | Accuracy (%) |
---|---|---|---|---|---|---|---|
[ ] | AREDS KORA | 13 | 5555 | Illumination correction, color balancing, resizing | CNN | Ensemble (random forest) | 63.30 |
[ ] | UK biobank | 2 | 100 | Data augmentation, morphological thinning, segmentation | CNN | Softmax | 86.97 |
[ ] | Eye PACS | 2 | 75137 | Resizing (512 × 512), brightness adjustment | CNN | Decision tree | 97 |
[ ] | Subset of Kaggle (DR) | 2 | 1000 | Data augmentation | CNN | Softmax | 94.5 |
[ ] | ARIA | 3 | 143 | Noise removal, contrast adjustment, histogram equalization | DBN, GRNN | Multi-class SVM | 96.73 |
[ ] | STARE | 10 | 279 | Data augmentation | VGG-19 | Random forest | 30.5 |
[ ] | STARE | 6 | 157 | CLAHE, data augmentation | ResNet-50 | | 100 |
A comprehensive comparison of state-of-the-art research with our proposed approach is presented in
This paper has proposed a deep learning model for the automated identification of multiple retinal diseases using the fundus imaging modality. Six convolutional neural network layers have been used for feature extraction and a softmax layer for classification, after applying several image processing techniques as preprocessing steps. The approach yields promising results: for binary classification, we achieved up to 100% accuracy; with 16 classes the obtained accuracy is 93.3%, whereas for 20 classes it is 92.4%. The research concludes that as the number of classes increases, the results deteriorate. Compared to existing studies, the proposed preprocessing steps, along with the CNN, result in promising outcomes. In the future, it is planned to address the imbalanced-class issue using data augmentation and to incorporate Principal Component Analysis (PCA) to speed up computations. Moreover, it is planned to deploy the proposed model in a clinical environment to assist ophthalmologists.
The authors received no specific funding for this study.
The authors declare that they have no conflicts of interest to report regarding the present study.