Computers, Materials & Continua
DOI:10.32604/cmc.2022.020865
Article

IoMT Enabled Melanoma Detection Using Improved Region Growing Lesion Boundary Extraction

Tanzila Saba1, Rabia Javed2,3, Mohd Shafry Mohd Rahim2, Amjad Rehman1,* and Saeed Ali Bahaj4

1Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
2School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai, 81310, Johor Bahru, Malaysia
3Department of Computer Science, Lahore College for Women University, Lahore, 54000, Pakistan
4MIS Department College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
*Corresponding Author: Amjad Rehman. Email: rkamjad@gmail.com
Received: 11 June 2021; Accepted: 25 October 2021

Abstract: The Internet of Medical Things (IoMT) and cloud-based healthcare applications and services have proven beneficial for better decision-making in recent years. Melanoma is a deadly cancer with a higher mortality rate than other skin cancer types such as basal cell, squamous cell, and Merkel cell carcinoma. However, detection and treatment at an early stage can result in a higher chance of survival. The classical methods of detection are expensive and labor-intensive; they also depend on the practitioner's level of training and on the availability of the needed equipment for the early detection of melanoma. Current improvements in computer-aided systems are providing very encouraging results in terms of precision and effectiveness. In this article, we propose an improved region growing technique for efficient extraction of the lesion boundary. This analysis and detection of melanoma are helpful for the expert dermatologist. The CNN features are extracted using the pre-trained VGG-19 deep learning model, and the selected features are then classified by SVM. The proposed technique is evaluated on two openly accessible datasets, ISIC 2017 and PH2, through both qualitative and quantitative experiments. The suggested segmentation method provides encouraging statistical results: a Jaccard index of 0.94 and accuracy of 95.7% on ISIC 2017, and a Jaccard index of 0.91 and accuracy of 93.3% on the PH2 dataset. These results are notably better than those of prevalent methods on the same datasets. The machine learning SVM classifier performs significantly well on the suggested feature vector, and a comparative analysis with existing methods is carried out in terms of accuracy. The proposed method detects and classifies melanoma far better than other methods, and the framework achieves promising results in both the segmentation and classification phases.

Keywords: Deep features extraction; lesion segmentation; melanoma detection; SVM; VGG-19; healthcare; IoMT; public health

1  Introduction

Skin is the fastest-growing organ of the human body: it envelops the body and serves as its protective layer against external harm. Among the various kinds of cancer, skin cancer is a significant cancer that mainly affects people with a light skin tone, and melanoma is its riskiest form. It usually affects young people between the ages of 15 and 29 [1]. According to the American Cancer Society (ACS) [2], 6,850 deaths out of 100,350 expected cases were projected for 2020, of which 4,610 are male and 2,240 are female. The critical nature of the disease has forced the need to detect skin cancers within a suitable time, removing the need for biopsy while still delivering reliable diagnostic results.

Skin cancer is categorized into two groups: non-melanoma and melanoma. The second category, melanoma, is selected in this research because it is the deadlier of the two. In addition, it can quickly spread from one body part to another. However, survival rates are higher if it is identified and diagnosed in time. There are two approaches for melanoma detection: the first is the clinical diagnosis-based approach, and the second is the computer-aided diagnosis system.

The clinical diagnosis-based approaches comprise two methods: 1) biopsy and 2) dermatologist analysis. The biopsy is an invasive method; dermatologist analysis, on the other hand, is non-invasive, but its accuracy is nearly 75% [3], which is not promising. In the biopsy method, a skin tissue sample is taken for a laboratory test. This melanoma detection test takes time and is painful. A patient can tolerate this one-time pain, but if melanoma is not detected at a timely or early stage, it becomes dangerous and sometimes even the cause of the patient's death.

For melanoma detection, dermatologists follow methods such as the ABCDE rule and the seven-point or three-point checklists. However, in cases where the dermoscopic image contrast is low and the lesion cannot be differentiated properly from healthy skin, these methods fail to work accurately. Moreover, delays in obtaining an appointment and the limited availability of expert dermatologists are also reasons for late melanoma detection [4]. Due to these causes, computer-aided systems are in high demand and necessary to reduce the death rate of melanoma patients.

Melanoma detection and identification is a very critical task for dermatologists. As a result, many computer-aided systems have been developed for automatic melanoma detection and classification to facilitate dermatologists and patients. There are four significant steps in creating such a computerized system: 1) pre-processing, 2) lesion segmentation, 3) feature extraction and selection, and 4) classification.

In medical image processing, segmentation of the skin lesion is necessary for the primary analysis of melanoma from dermoscopic images [5]. Researchers have proposed several computer-aided systems for skin cancer melanoma detection [6]. A variety of approaches such as threshold-based [7,8], region-based, edge-based, saliency map [9,10], convolutional neural network (CNN) and deep learning approaches have already been applied to the task of lesion segmentation. Threshold-based approaches are among the simplest methods [11,12] and work well on high contrast images.

Region-based methods are also beneficial where the lesion and skin colors are heterogeneous [13]. Region-based techniques use a seed-based approach that merges regions according to the image information and comparative data of the adjacent pixels. Region-based approaches include region growing, J-Image Segmentation (JSEG), watershed [14], and Statistical Region Merging (SRM) [15]. Edge-based segmentation approaches principally utilize the edge information of the input image to estimate the lesion boundary, after which post-processing techniques are applied [16].

An efficient feature vector helps classifiers categorize objects accurately. Different features such as hand-crafted, pattern, histogram-based, and deep learning features are utilized for skin lesion classification. Hand-crafted features cannot detect melanoma accurately because the skin lesion datasets contain a wide variety of images and different artifacts [17]. Deep learning is a new and robust field of machine learning that helps learn a higher level of data abstraction and can be implemented through several different algorithms. Deep learning is used to compute hierarchical and higher-level features that cannot be efficiently extracted by traditional machine learning techniques [18,19].

Distinguishing malignant from benign melanoma is also very tough for a computer-aided system [20]. Fig. 1 shows two columns of images: one is benign, meaning non-cancerous, and the other is malignant, i.e., cancerous. The images on both sides look similar, and the human eye is unable to identify which lesion is melanoma and which is benign [21]. Even the dermatologist refers the patient for biopsy when not sure about the lesion. The biopsy method is itself painful and time-consuming. Moreover, challenging images include gel/bubbles, marker ink, color charts, ruler marks, dark corners, skin hairs, and, most crucially, low contrast [22,23].


Figure 1: Benign and malignant melanoma sample images

Melanoma is a fatal form of skin cancer that must be diagnosed early for effective treatment [24,25]. Melanoma affects a patient's life and can even become a cause of death if its diagnosis is not accomplished on time. A rough pigment network and some suspicious signs do not suffice to diagnose melanoma from dermoscopic images [26,27]. Hence, it is essential to develop an efficient and accurate method for analyzing skin lesions in big datasets, extracting the lesion boundary, and classifying lesions into ‘Benign’ and ‘Melanoma’ [28,29]. Both dermatologists and patients will benefit from the proposed methods; patients can even avoid the painful biopsy test and save the money spent on other tests.

The main contributions of this research article are:

1)   We enhanced the quality of the low contrast lesion area by implementing a novel statistical intensity histogram method to calculate the positive and negative slopes.

2)   We proposed an improved region growing approach to detect the boundary of the skin lesion in dermoscopic images using a convolution filter and morphological operations.

3)   In the PH2 dataset, the melanoma class has the fewest images among all classes, so data augmentation is applied to balance the classes.

4)   The pre-trained deep visual geometry group (VGG-19) model is suggested for extracting features from segmented skin lesion images. Before the extraction of convolutional neural network (CNN) features, transfer learning is performed on the pre-trained VGG-19 model.

The rest of this article is organized as follows. Section 2 defines the proposed methodology in detail. Section 3 evaluates and analyzes the proposed method. Section 4 discusses our findings based on the experimental results. Finally, Section 5 concludes our work.

2  Proposed Methodology

The proposed methodology has four phases, as shown in Fig. 2: Phase 1, pre-processing; Phase 2, the proposed segmentation; Phase 3, CNN feature extraction; and Phase 4, machine learning classification of the skin lesion into two classes, benign and melanoma. A detailed explanation of each phase is given below.


Figure 2: The proposed methodology for melanoma detection

2.1 Pre-Processing

Pre-processing is a mandatory step for every computer-aided system [30]. It performs the essential task of improving image quality and eliminating unnecessary objects from an image. Different algorithms, such as histogram-based, morphological-based, and soft-computing-based methods, are used to enhance the quality of low contrast images [31].

The diverse dermoscopic images acquired from two different datasets are not directly suitable for the proposed segmentation process. Therefore, before passing an image to the segmentation phase, it is mandatory to pre-process it. The pre-processing phase is essential to obtain high accuracy in the subsequent phases, especially segmentation. In addition, the ISIC dataset contains a variety of dermoscopic images in which low contrast images are challenging to handle.

Here, a contrast enhancement technique based on the intensity histogram is implemented. In this technique, the dominant level (local maxima) is calculated, and then the contrast is equally distributed across the dermoscopic image. First, the histogram of every image is created to estimate the local minima of the histogram slope. The positive and negative slopes are graphically described in Fig. 3: the positive slope corresponds to increasing x and y values, and the negative slope corresponds to decreasing x and y values.


Figure 3: Pre-processing for local minima calculation. (a) The skin lesion image. (b) Intensity histogram. (c) Calculate local minima

The positive slope (PS), as defined in Eq. (1), and the negative slope (NS), as described in Eq. (2), are calculated. Here, delta y (Δy) and delta x (Δx) are the differences between the slope's ending values (y2, x2) and its initial values (y1, x1). After evaluating the positive and negative slope values, the local minima are extracted by applying the condition that where x (intensity level) and y (number of pixels at each intensity level) are greater than 0, the values fall under the positive slope; the local minima (LM) values are described in Eq. (3).

Positive Slope (PS) = Δy / Δx (1)

Negative Slope (NS) = Δy / Δx (2)

f(x) = LM, if f(x, y) > 0 (3)
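A minimal sketch of this histogram-slope analysis, assuming a grayscale input; the final contrast stretch between the outermost detected local minima is an illustrative assumption, since the article only states that contrast is "equally distributed" after the local minima are found:

```python
import numpy as np

def local_minima_from_histogram(gray):
    """Find intensity levels where the histogram slope turns from
    negative (NS) to positive (PS), cf. Eqs. (1)-(3).

    gray: 2-D uint8 array (grayscale dermoscopic image).
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    slope = np.diff(hist.astype(np.int64))           # Δy with Δx = 1
    minima = [x for x in range(1, len(slope))
              if slope[x - 1] < 0 and slope[x] > 0]  # NS followed by PS
    return hist, minima

def enhance_contrast(gray):
    """Stretch intensities between the outermost local minima (assumed rule)."""
    _, minima = local_minima_from_histogram(gray)
    if len(minima) < 2:
        return gray
    lo, hi = minima[0], minima[-1]
    stretched = (gray.astype(np.float32) - lo) / max(hi - lo, 1)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)
```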

The results before contrast enhancement (original image) and after contrast enhancement, shown in Fig. 4, are flawless. The visibility of the low contrast images is clear after performing the contrast enhancement technique.


Figure 4: Sample images before and after pre-processing on ISIC 2017 and PH2 datasets

2.2 Skin Lesion Segmentation

In Phase 2, the pre-processed image is used for segmenting the lesion. The improved region growing technique is applied for the extraction of the lesion boundary. In the suggested method, the preliminary seed mask is created using a region window of size 50 × 50. Two conditions apply: if an adjacent pixel has the same color value and is not already part of another cluster or region, its value is added to the current cluster or region. This process continues until the pixel value changes.

After this step, a convolution filter is applied to extract the boundary of the lesion. This operation blurs the image so that it is smooth and no sharp edge is observed. After that, a re-thresholding value is applied to refine the current boundary edges of the lesion. Finally, the improved segmented binary image is achieved after performing a morphological dilation operation. The steps of the proposed lesion segmentation method are graphically demonstrated in Fig. 5. The traditional region growing method only completes the contouring by adding similar neighboring pixels; in our improved region growing method, the convolution filter and re-threshold value are additionally applied.


Figure 5: Detail process flow of the proposed segmentation method

The selection of the pixels of an image is based on the condition given in Eq. (4). Here, Pi denotes the different regions, Pj the same region, and M the initial mask.

⋃ i,j ∈ n  R(Pi ∪ Pj) = { Pi = M;  Pj < M } (4)

The detailed process flow of the algorithm cycle is explained below:

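A compact sketch of the improved region growing cycle: seed mask, growth, convolution (blur), re-threshold, and dilation follow the description above, while the similarity tolerance, Gaussian sigma, and structuring-element sizes are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def improved_region_growing(gray, tol=10, rethresh=0.5):
    """Improved region growing: seed -> grow -> blur -> re-threshold -> dilate.

    gray: pre-processed grayscale image (uint8), lesion roughly centered.
    tol: color-similarity tolerance for adding a neighbor (assumed value).
    rethresh: re-threshold applied after the convolution (blur) step.
    """
    h, w = gray.shape
    eight = np.ones((3, 3), bool)  # 8-connected neighborhood

    # 1) Preliminary seed mask: a 50 x 50 window at the image center.
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), bool)
    mask[cy - 25:cy + 25, cx - 25:cx + 25] = True
    seed_mean = gray[mask].mean()

    # 2) Grow: add adjacent pixels whose value matches the region and
    #    which are not yet part of the region, until no change occurs.
    while True:
        frontier = ndimage.binary_dilation(mask, structure=eight) & ~mask
        accept = frontier & (np.abs(gray - seed_mean) <= tol)
        if not accept.any():
            break
        mask |= accept

    # 3) Convolution (Gaussian) filter smooths the region boundary,
    #    then re-thresholding refines the lesion's boundary edges.
    blurred = ndimage.gaussian_filter(mask.astype(float), sigma=3)
    refined = blurred > rethresh

    # 4) Morphological dilation yields the final segmented binary image.
    return ndimage.binary_dilation(refined, structure=eight, iterations=2)
```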

2.3 Features Extraction and Selection

VGG19-Net is one of the CNN architectures that is used extensively and is well documented [32]. This ConvNet has been favored over others due to its excellent performance on the ImageNet dataset. The two publicly available variations have 16 and 19 weight layers and are preferred due to their superior outcome over other variations [33].

In this article, the VGG-19 architecture, explained in Fig. 6, is selected because it generalizes better to other datasets. The network's input layer requires a 224 × 224-pixel RGB image. The input image goes through five convolutional blocks that use small convolutional filters with a 3 × 3 receptive field. Each convolutional block contains 2D convolution layers (the number of filters changes between blocks). Each hidden layer includes the ReLU activation function (a nonlinearity operation), and spatial pooling is carried out by feeding a max-pooling layer.


Figure 6: Deep feature extraction and entropy-based feature selection. Given i pairs of 224 × 224 segmented images as input to the feature extraction module, FV × 4096 features are extracted; then entropy-based feature selection is performed, and 2000 features are selected

The network ends with a classifier block containing three fully connected (FC) layers. A 70% to 30% (70:30) ratio is selected between training and testing data. The number of images available in the datasets is insufficient to train the network from scratch, so to overcome this problem, the transfer learning technique is employed on the pre-trained VGG-19 model. In addition, data augmentation is applied to the melanoma class, which has a smaller number of images: the cancer type ‘Malignant Melanoma’ has the fewest images, so data augmentation is applied to balance the classes.
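A minimal transfer-learning sketch of this step, assuming Keras with ImageNet weights; the replacement classifier head and the specific augmentation operations (flips, rotations) are illustrative assumptions, since the article does not enumerate them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

# Load VGG-19 pre-trained on ImageNet; freeze the convolutional base
# and replace the classifier head for the two lesion classes.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),   # FC layer, as in VGG's own head
    layers.Dense(2, activation="softmax"),   # benign vs. melanoma
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation for the under-represented melanoma class (assumed transforms).
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30, horizontal_flip=True, vertical_flip=True)
```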

After that, 4096 features are extracted from the 7th fully connected layer of the network. Then, the entropy-based feature selection method is applied. Finally, 2000 features are selected to classify the skin lesion images.
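A sketch of the deep feature extraction and entropy-based selection, assuming the Keras 'fc2' layer (the fc7 layer in the original VGG naming) as the 4096-dimensional output; for brevity it extracts from the ImageNet-weighted model rather than the transfer-learned one, and the histogram-based entropy estimate is our assumption of how the entropy criterion might be computed:

```python
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import Model

# 4096-D deep features from the second fully connected layer.
vgg = VGG19(weights="imagenet", include_top=True)
extractor = Model(inputs=vgg.input, outputs=vgg.get_layer("fc2").output)

def deep_features(images):
    """images: (N, 224, 224, 3) float array of segmented lesion images."""
    return extractor.predict(preprocess_input(images.copy()))

def entropy_select(features, k=2000, bins=32):
    """Rank features by Shannon entropy across the dataset and keep k."""
    def entropy(col):
        p, _ = np.histogram(col, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    scores = np.apply_along_axis(entropy, 0, features)
    return np.argsort(scores)[::-1][:k]  # indices of the top-k features
```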

2.4 Lesion Classification

Classification is performed on the 2000 selected ‘deep CNN’ features. Two publicly available online datasets, ISIC 2017 and PH2, are utilized for analysis. The machine learning classifier SVM is used to categorize the two classes: melanoma and benign.
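A minimal sketch of this stage, assuming scikit-learn; "Cubic-SVM" (the best performer in Section 3) corresponds to an SVM with a degree-3 polynomial kernel, and the 70:30 split follows Section 2.3. The random feature matrix stands in for the 2000 selected deep features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data; in the real pipeline these are the 2000 selected
# deep features and the benign (0) / melanoma (1) labels.
X = np.random.rand(300, 2000)
y = np.random.randint(0, 2, 300)

# 70:30 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Cubic SVM = polynomial kernel of degree 3.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```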

3  Experiments and Results

3.1 Experimental Setup and Datasets

The proposed framework is implemented using MATLAB 2019a on Windows 10 Home Edition. The hardware used has the following specifications: a 2.8 GHz Intel(R) Core(TM) i5-8250U CPU with a 64-bit operating system, 12.0 GB RAM, and an NVIDIA GeForce 940MX graphics card. The proposed framework for lesion detection and melanoma recognition is assessed on two freely available online dermoscopic datasets, ISIC 2017 and PH2 [34]. The ISIC 2017 dataset contains 2,600 dermoscopic images with a resolution of 296 by 1456 pixels. The dataset has 2,109 non-melanoma images and 491 melanoma images, as shown in Tab. 1.


A total of 200 images are available in the PH2 dataset. The resolution of these 8-bit images is 768 × 660. In PH2, there are two categories: 1) non-melanoma (benign, common nevi) and 2) melanoma. The detailed dataset division is shown in Tab. 2. In addition, the lesion masks extracted by expert dermatologists, known as ground truth images, are also available for validating the segmentation phase.


3.2 Skin Lesion Segmentation Results

The segmentation results are analyzed by comparing against the ground truth images, and statistical performance measures are used to assess the proposed segmentation method.

3.2.1 Qualitative Experiment

For the qualitative experiments, the ground truth images are used to measure the similarity between the expert-segmented images and the images segmented by our proposed method. First, some random images are selected from the ISIC 2017 dataset and presented graphically, as shown in Fig. 7. Column (a) lists the original images from the ISIC 2017 dataset. The second column (b) shows the pre-processing step in which the contrast enhancement method is applied. The third column (c) presents the binary segmented images. The segmented images after applying the proposed segmentation method are shown in the fourth column (d). The last column (e) demonstrates the comparison between the segmented images and the ground truth images: the green boundary marks the ground truth, and the blue boundary marks the segmented image.


Figure 7: (a) Original image. (b) Pre-process image. (c) Binary segmented image. (d) Proposed segmented RGB image. (e) Comparison with ground truth image

Here, it can be observed that the segmented images obtained from the proposed segmentation method are very close to the ground truth images. For the PH2 dataset, images are likewise chosen at random, and their visual presentation is shown in Fig. 8. As for the ISIC 2017 dataset, the steps from (a) to (e) are also performed for the PH2 dataset. The final column (e), comparing our method with the ground truth images, shows that the proposed segmentation method works similarly well on the PH2 dataset.


Figure 8: (a) Original image. (b) Pre-process image. (c) Binary segmented image. (d) Proposed segmented RGB image. (e) Comparison

3.2.2 Quantitative Experiment

The performance measures accuracy (AC), Jaccard index (JI), dice index (DI), precision (Prec), and recall (Rec) are used for the evaluation of the segmentation phase, as presented in Eqs. (5) to (9). In these equations, TP is the true positive rate, TN the true negative rate, FP the false positive rate, and FN the false negative rate.

AC = (TP + TN) / (TP + TN + FP + FN) (5)

JI = TP / (TP + FP + FN) (6)

DI = 2TP / (2TP + FP + FN) (7)

Prec = TP / (TP + FP) (8)

Rec = TP / (TP + FN) (9)
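These measures follow directly from the pixel-wise confusion counts of the predicted and ground truth masks; a small sketch, assuming binary masks as NumPy boolean arrays:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute Eqs. (5)-(9) from two binary masks of equal shape."""
    tp = np.sum(pred & gt)      # lesion pixels correctly segmented
    tn = np.sum(~pred & ~gt)    # background pixels correctly rejected
    fp = np.sum(pred & ~gt)     # background marked as lesion
    fn = np.sum(~pred & gt)     # lesion pixels missed
    return {
        "AC":   (tp + tn) / (tp + tn + fp + fn),
        "JI":   tp / (tp + fp + fn),
        "DI":   2 * tp / (2 * tp + fp + fn),
        "Prec": tp / (tp + fp),
        "Rec":  tp / (tp + fn),
    }
```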

The experiment is performed on the ISIC 2017 dataset, which contains a total of 2,000 images. In Tab. 3, the results of the top 10 images are given. The overall average accuracy achieved is 95.74%.


Correspondingly, the detailed segmentation results for the PH2 dataset are specified in Tab. 4. The experiment is performed on all 200 images of the PH2 dataset; however, the results of only a few images are presented. The overall average accuracy is 95.41%.


3.3 Skin Lesion Classification Results

After analyzing the segmentation results, the classification performance is evaluated using the statistical performance measures precision, sensitivity, specificity, and accuracy. The machine learning classifiers Fine-Tree, Cubic-SVM, and Fine-KNN are selected, as they gave the best accuracies. As can be seen in Tab. 5, the selected features achieve an accuracy of 95.1% with the Fine-Tree classifier, 95.9% with the Fine-KNN classifier, and 96.9% with the Cubic-SVM classifier on the ISIC 2017 dataset.


The performance of Cubic-SVM is also verified by the confusion matrix, as presented in Fig. 9.


Figure 9: Confusion matrix of Cubic-SVM classification results on dataset ISIC 2017

The machine learning classifiers were also applied to the PH2 dataset. As can be seen in Tab. 6, the Cubic-SVM gave the best accuracy, which is 96.4%.


The performance of Cubic-SVM can be validated from the confusion matrix, as illustrated in Fig. 10.


Figure 10: Confusion matrix of Cubic-SVM classification results on dataset PH2

The green diagonal of the confusion matrix shows the true-class percentages of the lesions: 94% benign and 99% melanoma in Fig. 9, and 96% atypical nevus, 93% common nevus, and 100% melanoma in Fig. 10. The pink cells mark the false percentages, i.e., samples that are not classified accurately.

4  Discussion

The proposed framework for detecting skin cancer melanoma has four phases, as shown in Fig. 2. In the first phase, contrast enhancement is performed on all dataset images to increase the contrast of the skin lesion images. In the second phase, the proposed segmentation algorithm is applied step by step. In the third phase, deep features are extracted to classify the lesion into benign and melanoma. In the last phase, classification is performed utilizing machine learning classifiers. In Section 3, the segmentation results are presented in graphical and tabular form. Two freely available online datasets are used for lesion segmentation and classification.

The proposed segmentation method is compared with existing techniques on the ISIC 2017 and PH2 datasets, as defined in Tab. 7. Bi et al. [35] utilized hand-crafted features and joint reverse classification for lesion categorization and achieved 92.0% accuracy on the PH2 dataset. Gutiérrez-Arriola et al. [36] presented a method based on pre-processing that gained 91.0% accuracy on ISIC 2017. Navarro et al. [37] implemented a superpixel-based segmentation method; on the ISIC 2017 dataset, their approach reached 85.4% accuracy. The deep learning pre-trained architectures VGG16 and ResNet are used in [38], where the deep features achieve an accuracy of 93.8%. Moreover, the deep learning architecture Inception is used in [39], whose fused features gained 94.7% accuracy. The proposed method attained accuracies of 95.0% on ISIC 2017 and 93.0% on the PH2 dataset.


Lesion segmentation and classification performance is assessed by qualitative and several quantitative methods. Our proposed framework achieved a specificity of 0.963, a sensitivity of 0.964, and an accuracy (AC) of 0.96. Besides, for segmentation, the average dice index (DI) was verified as 0.98, which signifies efficient segmentation performance. Furthermore, the comparative analysis against state-of-the-art approaches and the outcomes of the experiments reveal the proposed framework's dominance.

However, in some cases, the proposed segmentation method fails to extract the lesion from healthy skin. Some of the failure-case images from ISIC 2017 are shown in Fig. 11. These cases occur because the color of the lesion is very close to the skin tone.


Figure 11: The proposed method failed to extract the lesion accurately because of the lesion's visual similarity to normal skin

5  Conclusion

Medical image processing has offered many solutions to support dermatologists in extracting skin lesion boundaries and in classification. In this article, an improved region growing method is implemented to segment skin lesions from dermoscopic images. The deep learning VGG-19 model is implemented for the extraction of high-level features. Moreover, entropy-based selection is performed to select unique features. The selected features are further consumed for classification by SVM. The proposed technique is demonstrated on two freely available datasets, ISIC 2017 and PH2. It is determined that the machine learning SVM classifier performs considerably well with the proposed deep features. In the future, different kinds of skin lesion images may be taken from mobile devices or the internet and utilized for the segmentation and classification of skin lesions. Moreover, other classifiers can also be explored for skin lesion categorization.

Acknowledgement: This work is supported by the Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia, and the authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication. This work is also supported by the School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia. Moreover, the first author is also grateful for the support of the Department of Computer Science, Lahore College for Women University, Jail Road, Lahore 54000, Pakistan.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. American Cancer Society, “Cancer facts & figures 2019,” American Cancer Society, vol. 1, no. 1, pp. 1–76, 2019.
  2. American Cancer Society, “What is melanoma skin cancer?,” American Cancer Society, vol. 1, no. 1, pp. 1–5, 2019.
  3. A. Rehman, M. A. Khan, Z. Mehmood, T. Saba, M. Sardaraz et al., “Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction,” Microscopy Research and Technique, vol. 83, no. 4, pp. 410–423, 2020.
  4. T. Sadad, A. Rehman, A. Hussain, A. A. Abbasi et al., “A review on multi-organs cancer detection using advanced machine learning techniques,” Current Medical Imaging, vol. 17, no. 6, pp. 686–694, 2021.
  5. T. Saba, A. Rehman and G. Sulong, “An intelligent approach to image denoising,” Journal of Theoretical and Applied Information Technology, vol. 17, no. 2, pp. 32–36, 2010.
  6. T. Saba, A. Rehman, Z. Mehmood, H. Kolivand and M. Sharif, “Image enhancement and segmentation techniques for detection of knee joint diseases: A survey,” Current Medical Imaging, vol. 14, no. 5, pp. 704–715, 2018.
  7. T. Saba, “Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges,” Journal of Infection and Public Health, vol. 13, no. 9, pp. 1274–1289, 2020.
  8. N. Abbas, T. Saba, D. Mohamad, A. Rehman, A. S. Almazyad et al., “Machine aided malaria parasitemia detection in giemsa-stained thin blood smears,” Neural Computing and Applications, vol. 29, no. 3, pp. 803–818, 201
  9. F. Afza, M. A. Khan, M. Sharif, T. Saba, A. Rehman et al., “Skin lesion classification: An optimized framework of optimal color features selection,” in 2nd Int. Conf. on Computer and Information Sciences (ICCIS), IEEE, Sakaka, Saudi Arabia, pp. 1–6, 2020.
  10. M. A. Khan, M. Y. Javed, M. Sharif, T. Saba and A. Rehman, “Multi-model deep neural network based features extraction and optimal selection approach for skin lesion classification,” in Int. Conf. on Computer and Information Sciences (ICCIS), IEEE, Sakaka, Saudi Arabia, pp. 1–7, 2019.
  11. H. Fan, F. Xie, Y. Li, Z. Jiang and J. Liu, “Automatic segmentation of dermoscopy images using saliency combined with otsu threshold,” Computers in Biology & Medicine, vol. 85, pp. 75–85, 2017.
  12. A. Mittal, D. Kumar, M. Mittal, T. Saba, I. Abunadi et al., “Detecting pneumonia using convolutions and dynamic capsule routing for chest X-ray images,” Sensors, vol. 20, no. 4, pp. 1068–1098, 2020.
  13. M. A. Khan, M. Sharif, T. Akram, M. Raza, T. Saba et al., “Hand-crafted and deep convolutional neural network features fusion and selection strategy: An application to intelligent human action recognition,” Applied Soft Computing, vol. 87, pp. 1–14, 2020.
  14. M. Nasir, M. A. Khan, M. Sharif, M. Y. Javed, T. Saba et al., “Melanoma detection and classification using computerized analysis of dermoscopic systems: A review,” Current Medical Imaging, vol. 16, no. 7, pp. 794–822, 2020.
  15. R. Javed, M. Shafry, T. Saba, S. M. Fati, A. Rehman et al., “Statistical histogram decision based contrast categorization of skin lesion datasets dermoscopic images,” Computers, Materials & Continua, vol. 67, no. 2, pp. 2337–2352, 2021.
  16. M. A. Khan, S. Kadry, Y. D. Zhang, T. Akram, T. Sharif et al., “Prediction of COVID-19-pneumonia based on selected deep features and one class kernel extreme learning machine,” Computers & Electrical Engineering, vol. 90, pp. 106960, 2021.
  17. A. Norouzi, M. S. M. Rahim, A. Altameem, T. Saba, A. E. Rad et al., “Medical image segmentation methods, algorithms, and applications,” IETE Technical Review, vol. 31, no. 3, pp. 199–213, 2014.
  18. M. M. Adnan, M. S. M. Rahim, A. Rehman, Z. Mehmood, T. Saba et al., “Automatic image annotation based on deep learning models: A systematic review and future challenges,” IEEE Access, vol. 9, pp. 50253–50264, 2021.
  19. S. L. Marie-Sainte, L. Aburahmah, R. Almohaini and T. Saba, “Current techniques for diabetes prediction: Review and case study,” Applied Sciences, vol. 9, no. 21, pp. 4604–4612, 20
  20. R. Javed, M. Shafry, M. Rahim, T. Saba and A. Rehman, “A comparative study of features selection for skin lesion detection from dermoscopic images,” Network Modeling Analysis in Health Informatics & Bioinformatics, vol. 9, no. 1, pp. 1–13, 20
  21. Z. Al-Ameen, G. Sulong, A. Rehman, A. Al-Dhelaan, T. Saba et al., “An innovative technique for contrast enhancement of computed tomography images using normalized gamma-corrected contrast-limited adaptive histogram equalization,” EURASIP Journal on Advances in Signal Processing, vol. 2015, no. 1, pp. 1–12, 2015.
  22. J. Kawahara, A. Bentaieb and G. Hamarneh, “Deep features to classify skin lesions,” in Proc. Int. Symp. on Biomedical Imaging, IEEE, Prague, Czech Republic, vol. 4, pp. 1397–1400, 2016.
  23. D. A. Shoieb, S. M. Youssef and W. M. Aly, “Computer-aided model for skin diagnosis using deep learning,” Journal of Image & Graphics, vol. 4, no. 2, pp. 122–129, 2016.
  24. T. Saba, S. Al-Zahrani, A. Rehman, “Expert system for offline clinical guidelines and treatment,” Life Sci. Journal, vol. 9, no. 4, pp. 2639–2658, 2012.
  25. F. Afza, M. A. Khan, M. Sharif, A. Rehman, “Microscopic skin laceration segmentation and classification: A framework of statistical normal distribution and optimal feature selection,” Microscopy Research and Technique, vol. 82, no. 9, pp. 1471–1488, 2019.
  26. A. Husham, M. H. Alkawaz, T. Saba, A. Rehman, J. S. Alghamdi, “Automated nuclei segmentation of malignant using level sets,” Microscopy Research and Technique, vol. 79, no. 10, pp. 993–997, 2016.
  27. R. Javed, T. Saba, M. Shafry and M. Rahim, “An intelligent saliency segmentation technique and classification of low contrast skin lesion dermoscopic images based on histogram decision,” in Proc. Int. Developments in eSystems Engineering DeSE, IEEE, Kazan, Russia, pp. 164–169, 2019.
  28. R. Javed, M. S. M. Rahim and T. Saba, “An improved framework by mapping salient features for skin lesion detection and classification using the optimized hybrid features,” International Journal of Advanced Trends in Computer Science & Engineering, vol. 8, pp. 95–101, 2019.
  29. U. Ullah, T. Saba, N. Islam, N. Abbas and A. Rehman, “An ensemble classification of exudates in color fundus images using an evolutionary algorithm based optimal features selection,” Microscopy Research and Technique, vol. 82, no. 4, pp. 361–372, 2019.
  30. K. Yousaf, Z. Mehmood, T. Saba, A. Rehman, A. M. Munshi et al., “Mobile-health applications for the efficient delivery of health care facility to people with dementia (PwD) and support to their carers: A survey,” BioMed Research International, vol. 2019, pp. 1–26, 2019.
  31. M. A. Khan, T. Akram, M. Sharif, M. Awais, K. Javed et al., “CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features,” Computers and Electronics in Agriculture, vol. 155, pp. 220–236, 2018.
  32. K. Aurangzeb, I. Haider, M. A. Khan, T. Saba, K. Javed et al., “Human behavior analysis based on multi-types features fusion and von nauman entropy based features reduction,” Journal of Medical Imaging and Health Informatics, vol. 9, no. 4, pp. 662–669, 2019.
  33. F. Ramzan, M. U. G. Khan, A. Rehmat, S. Iqbal, T. Saba et al., “A deep learning approach for automated diagnosis and multi-class classification of Alzheimer's disease stages using resting-state fMRI and residual neural networks,” Journal of Medical Systems, vol. 44, no. 2, pp. 37–50, 2020.
  34. A. Rehman, H. Yar, N. Ayesha and T. Sadad, “Dermoscopy cancer detection and classification using geometric feature based on resource constraints device (Jetson nano),” in 2020 13th Int. Conf. on Developments in eSystems Engineering (DeSE), IEEE, Liverpool, United Kingdom, pp. 412–417, 2020.
  35. L. Bi, J. Kim, E. Ahn, D. Feng and M. Fulham, “Automatic melanoma detection via multi-scale lesion-biased representation and joint reverse classification,” in Proc. Int. Symp. on Biomedical Imaging, IEEE, Prague, Czech Republic, vol. 1, pp. 1055–1058, 2016.
  36. R. F. J. M. Gutiérrez-Arriola, M. Gómez-Álvarez, V. Osma-Ruiz and N. Sáenz-Lechón, “Skin lesion segmentation based on preprocessing, thresholding and neural networks image formats and preprocessing techniques,” Computer Vision and Pattern Recognition, vol. 1, pp. 2–5, 2017.
  37. F. Navarro, M. Escudero-viñolo and J. Bescós, “Accurate segmentation and registration of skin lesion images to evaluate lesion change,” IEEE Journal of Biomedical & Health Informatics, vol. 2194, no. 2, pp. 501–508, 2018.
  38. A. Soudani and W. Barhoumi, “An image-based segmentation recommender using crowdsourcing and transfer learning for skin lesion extraction,” Expert Systems with Applications, vol. 118, pp. 400–410, 2019.
  39. T. Saba, M. A. Khan, A. Rehman and S. L. Marie-Sainte, “Region extraction and classification of skin cancer: A heterogeneous framework of deep CNN features fusion and reduction,” Journal of Medical Systems, vol. 43, no. 9, pp. 1–29, 2019.
  40.  R. Javed, M. Shafry, M. Rahim, T. Saba and M. Rashid, “Region-based active contour JSEG fusion technique for skin lesion segmentation from dermoscopic images,” Biomedical Research, vol. 30, no. 6, pp. 1–10, 2019.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.