Intelligent Automation & Soft Computing
DOI:10.32604/iasc.2022.019117
Article

Deep Learning-Based Skin Lesion Diagnosis Model Using Dermoscopic Images

G. Reshma1,*, Chiai Al-Atroshi2, Vinay Kumar Nassa3, B.T. Geetha4, Gurram Sunitha5, Mohammad Gouse Galety6 and S. Neelakandan7

1Department of Information Technology, P. V. P. Siddhartha Institute of Technology, Vijayawada, 520007, India
2Department of Education Counselling, College of Basic Education University of Duhok, Duhok, 44001, Iraq
3Department of Computer Science & Engineering, South Point Group of Institutions, Sonipat, Haryana, 131001, India
4Department of ECE, Saveetha School of Engineering, SIMATS, Saveetha University, Tamil Nadu, 602105, India
5Department of CSE, Sree Vidyanikethan Engineering College, Tirupati, 517102, India
6Department of Information Technology, College of Engineering, Catholic University in Erbil, Kurdistan Region, 44001, Iraq
7Department of Information Technology, Jeppiaar Institute of Technology, 601201, India
*Corresponding Author: G. Reshma. Email: greshma@pvpsiddhartha.ac.in
Received: 02 April 2021; Accepted: 11 June 2021

Abstract: In recent years, intelligent automation in the healthcare sector has become more familiar due to the integration of artificial intelligence (AI) techniques. Intelligent healthcare systems assist in making better decisions, which in turn enables providers to deliver improved medical services to patients. At the same time, skin cancer, which manifests as skin lesions, is a deadly disease that affects people of all age groups. Skin lesion segmentation and classification play a vital part in the early and precise diagnosis of skin cancer by intelligent systems. However, the automated diagnosis of skin lesions in dermoscopic images is challenging because of problems such as artifacts (hair, gel bubbles, ruler markers), blurry boundaries, poor contrast, and the variable sizes and shapes of lesions. To address these problems, this study develops an intelligent multilevel thresholding with deep learning (IMLT-DL) based skin lesion segmentation and classification model using dermoscopic images. First, the presented IMLT-DL model incorporates top-hat filtering and an inpainting technique for pre-processing of the dermoscopic images. In addition, Mayfly Optimization (MFO) with multilevel Kapur's thresholding-based segmentation is employed to determine the infected regions. Besides, an Inception v3 based feature extractor is applied to derive a valuable set of feature vectors. Finally, classification is carried out using a gradient boosting tree (GBT) model. The presented model's performance is validated against the International Skin Imaging Collaboration (ISIC) dataset, and the experimental outcomes are inspected under different evaluation measures. The resultant experimental values confirm that the proposed IMLT-DL model outperforms existing methods, achieving a higher accuracy of 0.992.

Keywords: Intelligent models; computer-aided diagnosis; skin lesion; artificial intelligence; deep learning

1  Introduction

Skin cancer is one of the most commonly occurring kinds of cancer across the globe [1]. Melanoma, squamous cell carcinoma, basal cell carcinoma, and intraepithelial carcinoma are different kinds of skin cancer [2]. The human skin comprises three layers, namely the epidermis, dermis, and hypodermis. The epidermis contains melanocytes, which can produce melanin at a highly unusual rate under certain conditions; for example, long-term exposure to strong ultraviolet radiation can trigger excess melanin production. The unusual growth of melanocytes can cause a lethal kind of skin cancer [3]. The American Cancer Society reported in 2019 that around 96,480 new cases of melanoma were anticipated and that 7,230 people would die from the disease [4,5]. Early diagnosis of melanoma is essential for better treatment: when melanoma is identified in its earlier phases, the 5-year survival rate is 92% [6].

Nevertheless, the resemblance between malign and benign skin lesions is the central problem of melanoma detection. Consequently, detecting melanoma is complicated even for skilled professionals, and it is difficult to determine the lesion type with the naked eye.

In recent years, distinct imaging techniques have been utilized to capture skin images. Dermoscopy is a non-invasive imaging method that enables visual inspection of the skin surface using an immersion fluid and a light magnification device [7,8]. However, simple visual identification of melanoma in skin lesions can be subjective, irreproducible, or inaccurate because it depends on the specialist's knowledge. The accuracy of melanoma prediction from dermoscopic images by non-professionals lies in the range of 75%–84%. To resolve these problems in the melanoma diagnosis process, Computer-Aided Diagnosis (CAD) methods are required to assist professionals in the analysis. The CAD pipeline for melanoma identification involves pre-processing, segmentation, feature extraction, and classification. Lesion segmentation is a crucial phase in the CAD system for effectively identifying a melanoma, but it is made difficult by the considerable variation in texture, size, color, and position of skin lesions in dermoscopic images. Besides, additional features such as hair, ebony frames, air bubbles, color illumination, ruler marks, and blood vessels pose further challenges to lesion segmentation. Several techniques have been presented for the segmentation of skin lesions. In recent times, the Convolutional Neural Network (CNN), a deep learning (DL) technique, has attained effective outcomes in CAD models [9]. Some well-known DL architectures are AlexNet, MobileNet, ResNet, etc. In this study, the Inception model is employed for the following reasons: it achieves high computational efficiency with fewer parameters, and it offers a high performance gain through effective utilization of computing resources, with only a slight increase in computational load.

This study designs an Intelligent Multilevel Thresholding with Deep Learning (IMLT-DL) based skin lesion segmentation and classification model using dermoscopic images. Principally, the presented IMLT-DL model integrates top-hat filtering and an inpainting technique for pre-processing of the dermoscopic images. Moreover, Mayfly Optimization (MFO) with multilevel Kapur's thresholding-based segmentation is involved in determining the infected regions. Also, an Inception v3 based feature extractor is applied to generate a meaningful collection of feature vectors from the segmented image. Lastly, a GBT model-based classification process is carried out to assign proper class labels to the applied dermoscopic images. The proposed IMLT-DL model is simulated using the International Skin Imaging Collaboration (ISIC) dataset, and the experimental results are inspected under different evaluation measures. The rest of the paper is organized as follows: Section 2 reviews the state-of-the-art skin lesion segmentation techniques, Section 3 explains the proposed IMLT-DL model, and Section 4 validates the simulation results. Finally, the conclusion of the IMLT-DL model is drawn.

2  Literature Review

This section reviews some of the existing skin lesion segmentation and classification models. Jaisakthi et al. [10] presented a semi-supervised technique that combines GrabCut and K-means clustering for segmenting skin lesions: graph cuts first segment the melanoma, and K-means clustering then fine-tunes the lesion boundary. Pre-processing methods such as noise removal and image normalization are applied to the input image prior to pixel classification. Agrawal et al. [11] used the scale-invariant feature transform method for feature extraction. Madaan et al. [12] implemented convolutional neural networks for medical image classification. Similarly, Aljanabi et al. [13] presented an artificial bee colony (ABC) technique for segmenting lesions. This swarm-based method pre-processes the digital image and then determines the optimal threshold value of the melanoma, by which the lesion is segmented using Otsu thresholding.

Pennisi et al. [14] presented a method that segments the image using the Delaunay triangulation method (DTM). This technique includes a parallel segmentation process that creates two different images, which are later combined to obtain the final lesion mask: artifacts are removed from the images, and one branch then filters the skin from the image to provide a binary mask of the lesion. The DTM method is automatic and does not need a trained model, which makes it quicker than other techniques. Bi et al. [15] presented a novel automatic technique that performs image segmentation using image-wise supervised learning (ISL) and multi-scale superpixel-based cellular automata (MSCA). The researchers utilized probability maps for automatic seed selection, which removes the need for user-defined seed selection; subsequently, the MSCA method is applied to segment the skin lesions. Bi et al. [16] presented a Fully Convolutional Network (FCN) based technique to segment the skin lesion. The image features are learned by embedding the multiple stages of the FCN, and enhanced segmentation accuracy (compared to earlier work) is attained without applying any pre-processing (for example, contrast improvement, hair removal, and so on).

Yuan [17] presented a convolutional-deconvolutional neural network (CDNN) to automate skin lesion segmentation. This method concentrates on the training approach, making it highly effective without the use of extensive pre- and post-processing. It creates a probability map in which each component corresponds to the probability of the pixel belonging to the melanoma. Berseth [18] proposed a U-Net framework to segment skin lesions based on a probability map of the image dimensions, where a 10-fold cross-validation scheme is utilized for training. Paulraj [19] introduced a DL method to extract the lesion parts from the skin lesion image.

3  The Proposed Intelligent Skin Lesion Diagnosis Model

The system architecture of the presented IMLT-DL model is illustrated in Fig. 1. As the figure shows, the IMLT-DL model diagnoses the skin lesion using different stages of operations, namely pre-processing, segmentation, feature extraction, and classification. The detailed working of each operation is offered in the succeeding subsections.

3.1 Image Pre-Processing

Initially, pre-processing of the skin lesion images is performed in two stages, as defined below. Primarily, the format conversion and region of interest (RoI) detection processes are performed. As the existence of hair affects the detection and classification results, a hair removal process is carried out [20]. The RGB image is transformed into a grayscale image, and then the top-hat filtering technique is utilized to identify the thick and dark hair in the dermoscopic images. The result of this process captures the high variation between the input and output images, as given in Eq. (1) below:

$Z_w(G) = (G \circ b) - G$ (1)

where $\circ$ signifies the closing operation, $G$ represents the grayscale input image, and $b$ designates the grayscale structuring element. Lastly, in the inpainting process, the hairline pixels are replaced with nearby pixel values.
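
As a concrete illustration of this stage, the sketch below implements the dark-hair variant of Eq. (1) with OpenCV, assuming an 8-bit RGB input; the structuring-element size, threshold, and inpainting radius are illustrative assumptions rather than values reported in this paper.

```python
# A minimal sketch of the hair-removal pre-processing stage, assuming OpenCV.
import cv2
import numpy as np

def remove_hair(rgb_image: np.ndarray) -> np.ndarray:
    """Detect dark hair via the black-hat transform of Eq. (1) and inpaint it.

    rgb_image: 8-bit RGB image of shape (H, W, 3).
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    # Black-hat response: closing(G, b) - G highlights thin dark structures.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the filter response to obtain a binary hair mask.
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # Replace hairline pixels with values interpolated from nearby pixels.
    return cv2.inpaint(rgb_image, hair_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```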

Figure 1: Overall process of the proposed method

3.2 MFO with Multilevel Thresholding-Based Segmentation

Once the dermoscopic input images are pre-processed, the MFO with multilevel Kapur's thresholding-based segmentation model is applied to determine the infected lesion regions in the dermoscopic images. Kapur et al. [21] presented an effective thresholding technique to determine the optimal thresholds for image segmentation. It depends mainly on the entropy of the probability distribution of the image histogram. The technique computes the optimal threshold $th$ that maximizes the overall entropy. In the case of bi-level thresholding, the objective function of Kapur's problem can be represented as in Eq. (2):

$F_{kapur}(th) = H_1 + H_2$ (2)

where H1 and H2 can be computed as

$H_1 = -\sum_{i=1}^{th} \frac{Ph_i}{\omega_0} \ln\left(\frac{Ph_i}{\omega_0}\right)$ (3)

$H_2 = -\sum_{i=th+1}^{L} \frac{Ph_i}{\omega_1} \ln\left(\frac{Ph_i}{\omega_1}\right)$ (4)

where $Ph_i$ is the probability of intensity level $i$, and $\omega_0(th)$ and $\omega_1(th)$ are the probability masses of the two classes associated with $H_1$ and $H_2$, as shown in Eqs. (3) and (4). This entropy-based technique can be extended to multiple threshold values; for example, dividing the image into $k$ classes requires $k-1$ threshold values [22]. The objective function is then altered as in Eq. (5):

$F_{kapur}(TH) = \sum_{i=1}^{k} H_i$ (5)

where $TH = [th_1, th_2, \ldots, th_{k-1}]$ is a vector comprising the multiple threshold values. Each entropy is computed separately with its corresponding threshold value, so the formulation is extended to $k$ entropies as in Eq. (6):

Figure 2: Architecture of the Inception v3 model

$H_k = -\sum_{i=th_{k-1}+1}^{L} \frac{Ph_i}{\omega_{k-1}} \ln\left(\frac{Ph_i}{\omega_{k-1}}\right)$ (6)

where the values of the probability occurrence $(\omega_0, \omega_1, \ldots, \omega_{k-1})$ of the $k$ classes are obtained. For the optimal selection of the multiple threshold values, the MFO algorithm is applied.
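
For clarity, a small sketch of the multilevel Kapur objective of Eqs. (2)–(6) is given below, assuming `Ph` is a normalized 256-bin grayscale histogram; the `eps` guard is an implementation detail rather than part of the formulation.

```python
# A sketch of the Kapur objective F_kapur(TH) to be maximized over thresholds.
import numpy as np

def kapur_entropy(thresholds, Ph):
    """Total entropy of the classes induced by the threshold vector TH."""
    eps = 1e-12                                   # guards ln(0)
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(Ph)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):   # one class per interval
        omega = Ph[lo:hi].sum()                   # class probability mass
        if omega <= 0:
            continue                              # empty class contributes 0
        p = Ph[lo:hi] / omega
        total += -np.sum(p * np.log(p + eps))     # class entropy H_i
    return total

# Ph can be built from a grayscale image `gray` as:
# hist, _ = np.histogram(gray, bins=256, range=(0, 256)); Ph = hist / hist.sum()
```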

The MFO algorithm is inspired by the flight behaviour and mating process of mayflies [23]. In the MFO algorithm, the individuals in the swarm are divided into male and female mayflies (MFs). The male MFs are generally robust, which results in improved optimization. The MFO algorithm updates each position based on the existing position $p_i(t)$ and velocity $v_i(t)$ at the present round:

$p_i(t+1) = p_i(t) + v_i(t+1)$ (7)

Every male and female MF updates its position using Eq. (7); however, the two types of MFs have distinct velocity-updating characteristics.

3.2.1 Movement of Male MFs

Male MFs in the swarm perform exploration or exploitation over the iterations. The velocity is updated based on the present fitness value $f(x_i)$ and the past optimal fitness value along its trajectory, $f(x_{hi})$. When $f(x_i) > f(x_{hi})$, the male MF updates its velocity based on its current velocity together with the distances to its past optimal trajectory position $x_{hi}$ and the gbest position:

$v_i(t+1) = g \cdot v_i(t) + a_1 e^{-\beta r_p^2}[x_{hi} - x_i(t)] + a_2 e^{-\beta r_g^2}[x_g - x_i(t)]$ (8)

where $g$ is a coefficient that is linearly reduced from a maximum value to 1; $a_1$, $a_2$, and $\beta$ are constants; and $r_p$ and $r_g$ denote the Cartesian distances from the individual to its past optimal position and to the gbest position of the swarm, respectively. The Cartesian distance is the 2-norm of the difference array:

$\|x_i - x_j\| = \sqrt{\sum_{k=1}^{n} (x_{ik} - x_{jk})^2}$ (9)

At the same time, when $f(x_i) < f(x_{hi})$, the male MF updates its velocity from the present one using a random dance coefficient $d$:

$v_i(t+1) = g \cdot v_i(t) + d \cdot r_1$ (10)

where $r_1$ is a random number drawn from a uniform distribution.

3.2.2 Movement of Female MFs

The female MFs update their velocities in a different way. Female MFs with wings only survive for 1–7 days; therefore, they rush to find male MFs for mating and reproduction, and their velocities are updated depending on the male MF they wish to mate with. Here, the topmost female and male MFs are considered the first mating pair, the next best female and male MFs are treated as the second pair, and so on. Therefore, for the $i$-th female mayfly, when $f(y_i) < f(x_i)$:

$v_i(t+1) = g \cdot v_i(t) + a_3 e^{-\beta r_{mf}^2}[x_i(t) - y_i(t)]$ (11)

where $a_3$ represents a constant employed for balancing the velocity and $r_{mf}$ denotes the Cartesian distance between them. Conversely, when $f(y_i) \geq f(x_i)$, the female MF updates its velocity from the existing one with another random dance coefficient $fl$:

$v_i(t+1) = g \cdot v_i(t) + fl \cdot r_2$ (12)

where $r_2$ is a random number drawn from a uniform distribution.

3.2.3 MFs Mating

The top half of the male and female MFs mate and reproduce offspring. The offspring are randomly derived from their respective parents as defined below:

$\text{offspring}_1 = L \cdot \text{male} + (1 - L) \cdot \text{female}$ (13)

$\text{offspring}_2 = L \cdot \text{female} + (1 - L) \cdot \text{male}$ (14)

where $L$ is a random number drawn from a Gaussian distribution.
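
Below is a condensed sketch of the MFO search loop of Eqs. (7)–(12), written here to maximize an objective such as the Kapur entropy above; the population size, iteration count, and coefficient values are assumptions, males and females are paired by index for simplicity, and the mating step of Eqs. (13) and (14) is omitted for brevity.

```python
# A compact MFO sketch for maximizing a fitness function over a box [lb, ub].
import numpy as np

def mfo_maximize(fitness, dim, lb, ub, n=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    males = rng.uniform(lb, ub, (n, dim))
    females = rng.uniform(lb, ub, (n, dim))
    vm, vf = np.zeros((n, dim)), np.zeros((n, dim))
    a1, a2, a3, beta, d, fl = 1.0, 1.5, 1.5, 2.0, 0.1, 0.1  # assumed constants
    pbest = males.copy()                       # per-male best trajectory x_hi
    pbest_fit = np.array([fitness(x) for x in males])
    for t in range(iters):
        g = 1.0 - 0.5 * t / iters              # linearly decreasing inertia
        fm = np.array([fitness(x) for x in males])
        ff = np.array([fitness(y) for y in females])
        improved = fm > pbest_fit               # refresh personal bests
        pbest[improved], pbest_fit[improved] = males[improved], fm[improved]
        gbest = pbest[np.argmax(pbest_fit)]     # global best position x_g
        for i in range(n):
            if fm[i] < pbest_fit[i]:            # Eq. (8), maximization form
                rp2 = np.sum((males[i] - pbest[i]) ** 2)   # r_p^2
                rg2 = np.sum((males[i] - gbest) ** 2)      # r_g^2
                vm[i] = (g * vm[i]
                         + a1 * np.exp(-beta * rp2) * (pbest[i] - males[i])
                         + a2 * np.exp(-beta * rg2) * (gbest - males[i]))
            else:                               # Eq. (10): nuptial dance
                vm[i] = g * vm[i] + d * rng.uniform(-1, 1, dim)
            if ff[i] < fm[i]:                   # Eq. (11): drawn to male i
                rmf2 = np.sum((males[i] - females[i]) ** 2)
                vf[i] = (g * vf[i]
                         + a3 * np.exp(-beta * rmf2) * (males[i] - females[i]))
            else:                               # Eq. (12): random dance
                vf[i] = g * vf[i] + fl * rng.uniform(-1, 1, dim)
        males = np.clip(males + vm, lb, ub)     # Eq. (7)
        females = np.clip(females + vf, lb, ub)
    return pbest[np.argmax(pbest_fit)]
```

For instance, `mfo_maximize(lambda th: kapur_entropy(th, Ph), dim=4, lb=1, ub=255)` would search for four thresholds that maximize the Kapur objective of Eq. (5).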

3.3 Feature Extraction

During feature extraction, the segmented image is passed into the Inception v3 model, which generates a meaningful set of feature vectors. Krizhevsky et al. [24] proposed the AlexNet model for object recognition and classification, and it achieved improved performance. Subsequently, different convolutional architectures were developed to minimize the Top-5 error rate of object recognition and classification. Compared with the GoogleNet (Inception-v1) model, the Inception-v3 model achieves improved performance. Notably, it has three parts: a fundamental convolution block, an enhanced Inception block, and a classification block. Fig. 2 illustrates the structure of the Inception v3 model.

The fundamental convolution block, which alternates convolution with max-pooling layers, is employed to extract features. The enhanced Inception block is then built following the Network-In-Network idea [25]: multi-scale convolution operations are performed in parallel, and the convolution outputs of the branches are concatenated. Owing to the use of an auxiliary classifier, more stable outcomes and better gradient convergence can be accomplished, while vanishing gradients and overfitting problems are simultaneously mitigated. In Inception-v3, 1 × 1 convolution kernels are commonly employed to reduce the number of feature channels and accelerate training. Moreover, the decomposition of large convolutions into smaller ones also minimizes the number of parameters and the computational complexity. Therefore, the Inception v3 model is applied to extract the features from the dermoscopic images.
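
A minimal sketch of this feature-extraction stage is shown below, assuming the Keras implementation of Inception v3 with ImageNet weights, 299 × 299 inputs, and global average pooling; these configuration choices are assumptions, not settings reported in the paper.

```python
# A sketch of Inception-v3 feature extraction for segmented lesion images.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (InceptionV3,
                                                        preprocess_input)

# Global average pooling turns the final feature maps into one 2048-d vector.
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(batch: np.ndarray) -> np.ndarray:
    """batch: (N, 299, 299, 3) float array of RGB values in [0, 255]."""
    return extractor.predict(preprocess_input(batch))   # shape (N, 2048)
```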

3.4 Image Classification

At the final stage, the feature vectors extracted by the Inception v3 model are fed as input to the GBT model to determine the presence of skin lesions, i.e., to assign proper class labels to the applied dermoscopic images. The GBT model is trained using XGBoost on the features obtained in the earlier process [26,27]. The GBT model is invariant to input scaling, and it learns higher-order interactions among the features. In addition, the GBT model is trained in an additive manner: at every time step $t$, it grows another tree to minimize the residuals of the present model. The objective function is defined in Eq. (15):

$L^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t)$ (15)

where $l$ represents a loss function that measures the deviation between the label $y_i$ of the $i$-th sample and the prediction at the previous step plus the current tree's output, and $\Omega(f_t)$ is a regularization term that penalizes the complexity of the new tree. Finally, the GBT model generates appropriate class labels for all the applied test skin lesion images.
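
The sketch below shows the corresponding training step with the XGBoost scikit-learn interface; the feature arrays are hypothetical stand-ins for the Inception v3 outputs, and the hyperparameter values are illustrative assumptions.

```python
# A minimal sketch of the GBT classification stage with XGBoost.
import numpy as np
from xgboost import XGBClassifier

# Hypothetical stand-ins for Inception v3 feature vectors and labels of the
# seven lesion classes (0..6).
train_X = np.random.rand(200, 2048)
train_y = np.random.randint(0, 7, 200)
test_X = np.random.rand(20, 2048)

gbt = XGBClassifier(n_estimators=200,            # boosting rounds (trees)
                    max_depth=4,                 # depth of each tree
                    learning_rate=0.1,           # shrinkage per tree
                    objective="multi:softprob")  # multi-class probabilities
gbt.fit(train_X, train_y)                        # additive training, Eq. (15)
pred = gbt.predict(test_X)                       # one lesion class per image
```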

4  Performance Validation

The performance validation of the presented model takes place on the ISIC dataset [28], which comprises images of different classes such as Angioma, Nevus, Lentigo NOS, Solar Lentigo, Melanoma, Seborrheic Keratosis, and Basal Cell Carcinoma (BCC). The images in the ISIC dataset have a size of 640 × 480 pixels. A few sample test images are illustrated in Fig. 3.
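
For reproducibility, a sketch of the data-preparation step is given below; the directory layout (one subdirectory per class under `data/isic/`) is a hypothetical assumption, and images are resized to the Inception v3 input size of 299 × 299.

```python
# A sketch of loading the dataset, assuming a hypothetical per-class layout.
from pathlib import Path
import numpy as np
from PIL import Image

root = Path("data/isic")                         # hypothetical dataset root
classes = sorted(p.name for p in root.iterdir() if p.is_dir())
images, labels = [], []
for idx, cls in enumerate(classes):
    for f in (root / cls).glob("*.jpg"):
        img = Image.open(f).convert("RGB").resize((299, 299))
        images.append(np.asarray(img, dtype=np.float32))
        labels.append(idx)
X, y = np.stack(images), np.array(labels)        # (N, 299, 299, 3), (N,)
```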

Figure 3: Sample images

Fig. 4 presents original dermoscopic images alongside their masked versions: Fig. 4a shows the actual skin lesion images, and the lesion region in each image is correctly masked in Fig. 4b.

Figure 4: (a) Original images; (b) Masked images

Fig. 5 depicts the confusion matrix obtained by the presented IMLT-DL model for the classification of skin lesions. The figure demonstrates that the IMLT-DL model proficiently categorized 20 images under Angioma, 44 images under Nevus, 39 images under Lentigo NOS, 67 images under Solar Lentigo, 50 images under Melanoma, 52 images under Seborrheic Keratosis, and 37 images under BCC.

Figure 5: Confusion matrix for proposed IMLT-DL method

Tab. 1 and Figs. 6 and 7 report the skin lesion classification results of the IMLT-DL model. The obtained experimental values demonstrate that the IMLT-DL model appropriately classified the different skin lesion images. For instance, the IMLT-DL model classified the 'Angioma' class with a sensitivity of 0.952, specificity of 1, accuracy of 0.997, precision of 1, and G-measure of 0.976. Moreover, it classified the 'Nevus' class with a sensitivity of 0.957, specificity of 0.996, accuracy of 0.991, precision of 0.978, and G-measure of 0.967. It classified the 'Lentigo NOS' class with a sensitivity of 0.951, specificity of 1, accuracy of 0.994, precision of 1, and G-measure of 0.975, and the 'Solar Lentigo' class with a sensitivity of 0.985, specificity of 0.996, accuracy of 0.994, precision of 0.985, and G-measure of 0.985. Furthermore, the 'Melanoma' class was classified with a sensitivity of 0.980, specificity of 1, accuracy of 0.997, precision of 1, and G-measure of 0.990; the 'Seborrheic Keratosis' class with a sensitivity of 0.963, specificity of 0.989, accuracy of 0.984, precision of 0.946, and G-measure of 0.954; and the 'BCC' class with a sensitivity of 1, specificity of 0.986, accuracy of 0.987, precision of 0.902, and G-measure of 0.950.
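
For reference, the sketch below derives these per-class measures from a multi-class confusion matrix such as the one in Fig. 5, under the common assumption that the G-measure is the geometric mean of precision and sensitivity.

```python
# Per-class measures from a confusion matrix C (rows: true, cols: predicted).
import numpy as np

def per_class_metrics(C: np.ndarray, k: int) -> dict:
    tp = C[k, k]
    fn = C[k, :].sum() - tp                   # class-k images missed
    fp = C[:, k].sum() - tp                   # other images labelled as k
    tn = C.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "accuracy": (tp + tn) / C.sum(),
            "precision": precision,
            "g_measure": float(np.sqrt(precision * sensitivity))}
```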

Figure 6: Result analysis of IMLT-DL model with different measures

Figure 7: Precision and G-measure analysis of IMLT-DL model

A detailed comparative analysis of the IMLT-DL model against other existing methods is given in Fig. 8 and Tab. 2 [29–34]. From the results, it is revealed that the SVM model showcased the worst outcome, with a sensitivity of 0.732, specificity of 0.754, and accuracy of 0.743. Next, the high-level features model obtained a slightly increased sensitivity of 0.835, specificity of 0.813, and accuracy of 0.811, while the CNN model attained a somewhat raised sensitivity of 0.817, specificity of 0.829, and accuracy of 0.824. The ensemble classifier model accomplished intermediate results, with a sensitivity of 0.842, specificity of 0.826, and accuracy of 0.84, and the deep CNN model demonstrated manageable results with a sensitivity of 0.846, specificity of 0.832, and accuracy of 0.843. The DLN model yielded a sensitivity of 0.732, specificity of 0.754, and accuracy of 0.743. Meanwhile, the CDNN model offered a sensitivity of 0.825, specificity of 0.975, and accuracy of 0.934, whereas the ResNets model demonstrated a sensitivity of 0.802, specificity of 0.985, and accuracy of 0.934. Moreover, the DCCN-GC model resulted in a reasonable sensitivity of 0.908, specificity of 0.927, and accuracy of 0.934, and the DL-ANFC model showed near-optimal outcomes with a sensitivity of 0.934, specificity of 0.987, and accuracy of 0.979. However, the IMLT-DL model outperformed all the compared methods, with a sensitivity of 0.97, specificity of 0.995, and accuracy of 0.992.

Figure 8: Comparative analysis of IMLT-DL model with existing techniques

Fig. 9 demonstrates the ROC curve analysis of the proposed IMLT-DL model for classifying skin lesion images. The figure shows that the IMLT-DL model obtained a maximum ROC value of 98.765, confirming that it effectively classifies the dermoscopic input images for skin lesion classification.
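
Such a multi-class ROC figure can be summarized numerically as shown below; `y_true` and `y_prob` are hypothetical stand-ins for the test labels and the GBT class probabilities, and the one-vs-rest macro averaging is an assumption about how the curve is aggregated.

```python
# A sketch of computing a macro-averaged, one-vs-rest ROC AUC with sklearn.
from sklearn.metrics import roc_auc_score

# y_true: (N,) integer labels; y_prob: (N, 7) probabilities, e.g. from
# gbt.predict_proba(test_X) in the classification sketch above.
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
```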

Figure 9: ROC analysis of proposed IMLT-DL model

From the tables and figures discussed above, it is apparent that the IMLT-DL model accomplishes effective skin lesion segmentation and classification. Therefore, it can serve as an appropriate tool for segmenting and classifying skin lesions in dermoscopic images in a real-time environment.

5  Conclusion

This study has developed a novel IMLT-DL model for effective skin lesion segmentation and classification using dermoscopic images. The IMLT-DL model diagnoses the skin lesion using different stages of operations, namely pre-processing, segmentation, feature extraction, and classification. At the initial level, the presented IMLT-DL model integrates top-hat filtering and an inpainting technique for pre-processing of the dermoscopic images. Then, multilevel thresholding-based segmentation is carried out to determine the infected skin lesion regions in the dermoscopic images. Inception v3 based feature extraction and GBT based classification are performed for effective skin lesion detection. The proposed IMLT-DL model is simulated using the ISIC dataset, and the experimental outcomes are examined with respect to several measures. The obtained simulation outcomes verified the superior performance of the IMLT-DL model, which accomplished a maximum accuracy of 0.992. In the future, the performance of the skin lesion segmentation process can be improved using advanced DL-based instance segmentation techniques.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. N. Razmjooy, M. Ashourian, M. Karimifard, V. V. Estrela, H. J. Loschi et al., “Computer-aided diagnosis of skin cancer: A review,” Current Medical Imaging, vol. 16, no. 7, pp. 781–793, 2020.
  2. O. T. Jones, C. K. Ranmuthu, P. N. Hall, G. Funston and F. M. Walter, “Recognising skin cancer in primary care,” Advances in Therapy, vol. 37, no. 1, pp. 603–616, 2020.
  3. J. Feng, N. G. Isern, S. D. Burton and J. Z. Hu, “Studies of secondary melanoma on C57BL/6J mouse liver using 1H NMR metabolomics,” Metabolites, vol. 3, no. 4, pp. 1011–1035, 2013.
  4. A. Jemal, R. Siegel, E. Ward, Y. Hao, J. Xu et al., “Cancer statistics,” CA Cancer J. Clin., vol. 69, no. 1, pp. 7–34, 2019.
  5. T. Tarver and J. Consum, “Health internet 2012,” American Cancer Society: Cancer Facts and Figures, vol. 16, no. 1, pp. 366–367, 2014.
  6. R. Siegel, K. Miller and A. Jemal, “Cancer statistics, 2018,” CA Cancer J. Clin., vol. 68, no. 1, pp. 7–30, 2018.
  7. G. Pellacani and S. Seidenari, “Comparison between morphological parameters in pigmented skin lesion images acquired using epiluminescence surface microscopy and polarized-light video microscopy,” Clinical Dermatology, vol. 20, no. 1, pp. 222–227, 2002.
  8. A. R. A. Ali and T. M. Deserno, “A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data,” in Proc. Medical Imaging 2012: Image Perception, Observer Performance, and Technology Assessment, Bellingham, WA, USA, International Society for Optics and Photonics, pp. 8318, 2012.
  9. E. S. Madhan, S. Neelakandan and R. Annamalai, “A novel approach for vehicle type classification and speed prediction using deep learning,” Journal of Computational and Theoretical Nano science, vol. 17, no. 5, pp. 2237–2242, 2020.
  10. S. M. Jaisakthi, P. Mirunalini and C. Aravindan, “Automated skin lesion segmentation of dermoscopic images using grabcut and kmeans algorithms,” IET Comput. Vis., vol. 12, no. 1, pp. 1088–1095, 2018.
  11. P. Agrawal, D. Chaudhary, V. Madaan, A. Zabrovskiy, R. Prodan et al., “Automated bank cheque verification using image processing and deep learning methods,” Multimedia Tools and Applications, vol. 80, no. 1, pp. 5319–5350, 2021.
  12. V. Madaan, A. Roy, C. Gupta, P. Agrawal, A. Sharma et al., “XCOVNet: Chest X-ray image classification for covid-19 early detection using convolutional neural networks,” New Gener. Comput., vol. 39, no. 2, pp. 1–15, 2021.
  13. M. Aljanabi, Y. E. Özok, J. Rahebi and A. S. Abdullah, “Skin lesion segmentation method for dermoscopy images using artificial bee colony algorithm,” Symmetry, vol. 10, no. 8, p. 347, 2018.
  14. A. Pennisi, D. D. Bloisi, D. Nardi, A. R. Giampetruzzi and C. Mondino, “Skin lesion image segmentation using delaunay triangulation for melanoma detection,” Computerized Medical Imaging and Graphics, vol. 52, no. 1, pp. 89–103, 2016.
  15. L. Bi, J. Kim, E. Ahn, D. Feng and M. Fulham, “Automated skin lesion segmentation via image-wise supervised learning and multi-scale superpixel based cellular automata,” in Proc. of the Int. Symp. on Biomedical Imaging, Prague, Czech Republic, pp. 1059–1062, 2016.
  16. L. Bi, J. Kim, E. Ahn, A. Kumar and M. Fulhan, “Dermoscopic image segmentation via multi-stage fully convolutional networks,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 1, pp. 2065–2074, 2017.
  17. Y. Yuan, “Automatic skin lesion segmentation with fully convolutional-deconvolutional networks,” arXiv preprint, arXiv:1703.05165, 2017.
  18. M. Berseth, “Skin lesion analysis towards melanoma detection,” International Skin Imaging Collaboration, vol. 18, no. 2, pp. 13–18, 2017.
  19. D. Paulraj, “An automated exploring and learning model for data prediction using balanced CA-SVM,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 1–12, 2020.
  20. M. Y. Sikkandar, B. A. Alrasheadi, N. B. Prakash, G. R. Hemalakshmi, A. Mohanarathinam et al., “Deep learning based an automated skin lesion segmentation and intelligent classification model,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 1–11, 2021.
  21. J. N. Kapur, P. K. Sahoo and A. K. Wong, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, and Image Processing, vol. 29, no. 1, pp. 273–285, 1985.
  22. E. H. Houssein, B. E. D. Helmy, D. Oliva, A. A. Elngar and H. Shaban, “A novel black widow optimization algorithm for multilevel thresholding image segmentation,” Expert Systems with Applications, vol. 167, p. 114159, 2021.
  23. Z. M. Gao, J. Zhao, S. R. Li and Y. R. Hu, “The improved mayfly optimization algorithm,” Journal of Physics: Conference Series, vol. 1684, no. 1, p. 012077, 2020.
  24. A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of the 25th Int. Conf. on Neural Information Processing Systems, Lake Tahoe, Nevada, USA, pp. 1097–1105, 2012.
  25. C. Lin, L. Li, W. Luo, K. C. Wang and J. Guo, “Transfer learning based traffic sign recognition using inception-v3 model,” Periodica Polytechnica Transportation Engineering, vol. 47, no. 3, pp. 242–250, 2019.
  26. Y. Liu, Y. Gu, J. C. Nguyen, H. Li, J. Zhang et al., “Symptom severity classification with gradient tree boosting,” Journal of Biomedical Informatics, vol. 75, pp. 105–111, 2017.
  27. S. Neelakandan and D. Paulraj, “A gradient boosted decision tree-based sentiment classification of twitter data,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 18, no. 4, pp. 1–21, 2020.
  28. S. Divyabharathi, “Large scale optimization to minimize network traffic using MapReduce in big data applications,” in Proc. Int. Conf. on Computation of Power, Energy Information and Communication, pp. 193–199, 2016.
  29. D. Połap, A. Winnicka, K. Serwata, K. Kęsik and M. Woźniak, “An intelligent system for monitoring skin diseases,” Sensors, vol. 18, no. 8, p. 2552, 2018.
  30. S. Satpathy, P. Mohan, S. Das and S. Debbarma, “A new healthcare diagnosis system using an IoT-based fuzzy classifier with FPGA,” Journal of Supercomputing, vol. 76, no. 8, pp. 5849–5861, 2020.
  31. H. M. Ünver and E. Ayan, “Skin lesion segmentation in dermoscopic images with combination of YOLO and grabcut algorithm,” Diagnostics, vol. 9, no. 3, p. 72, 2019.
  32. Y. Yuan, M. Chao and Y. C. Lo, “Automatic skin lesion segmentation with fully convolutional-deconvolutional networks,” arXiv preprint, arXiv:1703.05165, 2017.
  33. S. Satpathy, M. Prakash, S. Debbarma, A. S. Sengupta and B. K. D. Bhattacaryya, “Design a FPGA, fuzzy based, insolent method for prediction of multi-diseases in rural area,” Journal of Intelligent & Fuzzy Systems, vol. 37, no. 5, pp. 7039–7046, 2019.
  34. L. Bi, J. Kim, E. Ahn and D. Feng, “Automatic skin lesion analysis using large-scale dermoscopy images and deep residual networks,” arXiv preprint, arXiv:1703.04197, 2017.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.