Sindhuja Manickavasagam1,*, Poonkuzhali Sugumaran2
1 Department of Information Technology, Rajalakshmi Engineering College, Chennai, 600125, Tamilnadu, India
2 Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, 600125, Tamilnadu, India
* Corresponding Author: Sindhuja Manickavasagam. Email:
Computer Systems Science and Engineering 2023, 45(1), 939-953. https://doi.org/10.32604/csse.2023.030491
Received 27 March 2022; Accepted 19 May 2022; Issue published 16 August 2022
Lung cancer (LC) is the primary cause of cancer death in both men and women. In many patients, LC manifests through early symptoms in the affected parts of the body. Treatment and prognosis depend on the type of cancer, the stage (extent of spread), and the patient's overall condition. Likely treatments include surgery, radiotherapy, and chemotherapy. Survival relies upon the stage, the overall health of the person, and other determinants; however, only about 14% of individuals diagnosed with LC live 5 years after diagnosis. The death rate and the prevalence of tobacco consumption are correspondingly high. Generally, LC grows inside the wall or epithelium of the respiratory tree; however, it can begin anywhere else in the lung region and affect other parts of the lung system. LC most commonly affects persons between the ages of 55 and 65 and frequently takes several years to develop. There are two major classes of LC: small cell LC (SCLC), also called oat cell cancer, and non-small cell LC (NSCLC).
Each type of LC develops and spreads in different ways and is treated accordingly. When a cancer has characteristics of both varieties, it is known as mixed small cell/large cell cancer. NSCLC is more common than SCLC; it grows and spreads more gradually. SCLC is closely associated with smoking; it grows rapidly and forms large tumors that may spread widely throughout the body, frequently starting in the bronchi near the centre of the chest. The LC death rate is associated with the total number of cigarettes consumed. Pre-diagnosis helps to recognize or narrow down the probability of screening for LC. Signs of LC and risk factors (obesity, smoking, alcohol consumption, and insulin resistance) have a statistically significant effect in the early diagnosis phase. The LC diagnostic and prognostic problems fall largely within the scope of widely discussed classification problems [9,10]. These problems have inspired many investigators in the domains of statistics, computational intelligence, and data mining.
In , a k-nearest-neighbor method was applied to identify the stage of a patient's disease, where a genetic algorithm is employed for effectual feature selection to reduce the data dimension and improve classification performance. To enhance the performance of the presented method, the optimal value of k is determined experimentally. Agrawal et al.  examined the lung tumor dataset available from SEER by proposing an accurate survival prediction model for lung cancer using data mining techniques. The developed pre-processing step led to the splitting, removal, or modification of attributes, and 2 of the 11 derived attributes were found to have significant predictive power. Data mining classification techniques were then applied to the pre-processed dataset, along with validation and data mining optimization. In , an adaptive thresholding segmentation approach was employed to segment the input images, a well-known Support Vector Machine (SVM) image classification approach was employed to classify lung cancer, and a content-based image retrieval method was utilized to compare lung image features, namely intensity, contrast, shape, and texture. Lim et al.  proposed a bioinformatics pipeline with standardized data pre-processed by a robust statistical method, allowing others to implement large-scale meta-analyses without conducting statistical correction and time-consuming data mining. The authors of  explored the role of Chinese herbal prescriptions in non-small cell lung cancer (NSCLC) and offered a reference for the application of prescriptions and herbs. Randomized and quasi-randomized controlled clinical trials of Chinese herbal medicine in the treatment of NSCLC were gathered from 7 databases to build a dataset of prescriptions for NSCLC.
This study develops an Ant Lion Optimization with Optimal Deep Belief Network (ODBN) model, named ALO-ODBN, for lung cancer detection and classification. Initially, the ALO-ODBN model applies a min-max data normalization approach to preprocess the input data. Besides, the ALO algorithm is executed to choose an optimal subset of features. In addition, the DBN model receives the chosen features and performs lung cancer classification. Finally, the Adam optimizer is utilized for hyperparameter optimization of the DBN model. In order to report the enhanced performance of the ALO-ODBN model, a wide-ranging experimental analysis is performed, and the results report the supremacy of the ALO-ODBN model.
In this study, a new ALO-ODBN model has been developed for lung cancer detection and classification. Initially, the ALO-ODBN model applies a min-max data normalization approach to preprocess the input data. Besides, the ALO algorithm is executed to choose an optimal subset of features. In addition, the DBN model receives the chosen features and performs lung cancer classification. Finally, the Adam optimizer is utilized for hyperparameter optimization of the DBN model. Fig. 1 depicts the overall workflow of the ALO-ODBN technique.
Initially, the ALO-ODBN model applies a min-max data normalization approach to pre-process the input data. In any ML model, data normalization is widely utilized to attain proficient results. The feature values can differ from small to large values, so the normalization process is employed to scale the features as given below:

x' = (x - x_min) / (x_max - x_min)
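As a minimal illustrative sketch (not the authors' code), the min-max scaling above can be written column-wise over a feature matrix; the guard against constant features is an assumption added for robustness:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature column of X into [0, 1] via (x - min) / (max - min)."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # avoid divide-by-zero for constant features
    rng = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (X - x_min) / rng

X = np.array([[2.0, 10.0], [4.0, 20.0], [6.0, 30.0]])
print(min_max_normalize(X))  # each column scaled into [0, 1]
```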
The recent metaheuristic ant lion optimization (ALO) algorithm models the interaction of antlions and ants. Small cone-shaped traps are made by the antlions, in which they hide and wait for their prey. The major steps involved in the tuning process are illustrated below.
The ant lion optimization algorithm simulates the hunting mechanism of the antlions. The following subsections describe the steps of the algorithm.
The random walk of ants is modeled as

X(t) = [0, cumsum(2r(t_1) - 1), cumsum(2r(t_2) - 1), ..., cumsum(2r(t_T) - 1)]

where cumsum calculates the cumulative sum of successive random steps leading to a random walk, r(t) is a stochastic function that takes the value 1 if a random number generated with uniform distribution in the interval [0, 1] is greater than 0.5 and 0 otherwise, and T denotes the maximum number of iterations. Fig. 2 represents the flowchart of the ALO algorithm.
The fitness values of the ants are stored in a matrix and are a function of the objective function; in the same manner, the fitness values of the antlions are stored in another matrix. During each step of the iteration, the random walk of the ants must be confined within the boundary of the search space. This is executed by Eq. (3),

X_i^t = ((X_i^t - a_i) (d_i^t - c_i^t)) / (b_i - a_i) + c_i^t   (3)

where a_i and b_i are the minimum and maximum steps of the random walk of the i-th ant during the t-th iteration, and c_i^t and d_i^t are the inferior and superior restraints of the random walk.
The ants walk randomly without seeing the trap, so they may fall into the cone-shaped trap. This is realized by adaptively decreasing the radius of the random walk as shown:

c^t = c^t / I,   d^t = d^t / I
where c^t is the minimum value corresponding to the t-th iteration, d^t is the corresponding maximum value, and I is a ratio given by Eq. (5):

I = 10^w (t / T)   (5)
Here, t is the current iteration, T is the maximum iteration, and w is a constant.
Trapping of ant by antlion
The range of the random walk of the i-th ant during the t-th iteration is modified as below in order to model the trapping behavior:

c_i^t = Antlion_j^t + c^t,   d_i^t = Antlion_j^t + d^t
where Antlion_j^t is the position of the j-th antlion during the t-th iteration.
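The random walk and the shrinking-boundary steps above can be sketched as follows. This is a simplified one-dimensional illustration, not the authors' implementation: the fixed constant w = 2, the random seed, and the symmetric bounds are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_random_walk(T, lb, ub, w=2):
    """One dimension of the ALO ant walk: cumulative sum of +/-1 steps,
    min-max scaled into bounds that shrink as the iteration advances."""
    steps = np.where(rng.random(T) > 0.5, 1.0, -1.0)   # 2*r(t) - 1
    X = np.concatenate(([0.0], np.cumsum(steps)))      # random walk X(t)
    a, b = X.min(), X.max()                            # walk extremes
    positions = np.empty(T)
    for t in range(1, T + 1):
        I = 10 ** w * t / T                            # ratio I from Eq. (5)
        c, d = lb / I, ub / I                          # adaptively shrinking bounds
        # min-max scaling of the walk into [c, d], as in Eq. (3)
        positions[t - 1] = (X[t] - a) * (d - c) / (b - a) + c
    return positions

walk = bounded_random_walk(T=50, lb=-5.0, ub=5.0)
print(walk.min(), walk.max())  # stays inside the search-space bounds
```

Note how later iterations are confined to ever-tighter intervals around the trap, mirroring the ant sliding toward the antlion.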
At this stage, the DBN model receives the chosen features and performs lung cancer classification. Typically, a DBN is constructed by stacking Restricted Boltzmann Machines (RBMs), which capture the higher-order correlations observed in the visible units. The DBN is pretrained in an unsupervised, greedy layer-wise manner to learn a stack of RBMs through the Contrastive Divergence (CD) approach; the output representation of one RBM is utilized as the input dataset to train the next RBM in the stack. After the pretraining, the DBN is fine-tuned by backpropagation of error derivatives, and the initial weights and biases of all layers are corrected. An RBM is an approach to represent each training sample in a compact and meaningful manner by capturing its regularities and inner structure. This is realized by introducing an additional set of neurons, named hidden units, whose values are fixed indirectly from the training dataset; the visible units, on the other hand, obtain their values directly from the training dataset. In the illustrated network, there are three hidden nodes and four visible nodes. In the forward pass, the output is generated by multiplying each input by its respective weight and summing the products. The summation is then added to a bias, b, and lastly, the result is passed through an activation function f to generate the node output, a. To learn the connection weights of the network, the RBM learns to reconstruct the data in an unsupervised manner, making several forward and backward passes between the visible and hidden layers without involving a deep network. In the reconstruction stage, the output activation of the forward pass, a, denotes the input to the backward pass. It is multiplied by the same weights as in the forward pass, and the summation of the products is added to the visible-layer biases, producing the final output of the operation, which is the reconstruction of the original input, r. Naturally, the reconstructed value does not exactly match the original one.
In other words, the reconstruction makes a guess about the probability distribution of the original input, that is, the values of many different points simultaneously. Fig. 3 showcases the framework of the DBN technique.
Consider the input dataset and the reconstruction as standard curves of distinct shapes. The aim is to reduce the error, or diverging area, between the two curves, named the Kullback-Leibler (KL) Divergence, with the RBM optimization approach CD. CD learning approximately follows the gradient of the difference of two KL-divergences. This process assists in iteratively altering the weights and then estimating the actual information that makes the two probability distributions converge.
Therefore, p denotes the probability distribution of the set of original inputs x, q denotes the reconstructed distribution, and the divergence is obtained by integrating their difference.
Let P and Q represent the distributions of a continuous random variable; the KL-Divergence equation is determined below:

D_KL(P || Q) = ∫ p(x) log(p(x) / q(x)) dx
Here, p and q denote the densities of P and Q, respectively.
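A small numerical sketch of the KL divergence for discrete distributions follows; the smoothing constant eps is an assumption added to avoid taking the log of zero:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence: D_KL(P || Q) = sum_x p(x) * log(p(x) / q(x))."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(kl_divergence(p, q))  # small positive value: the distributions differ
print(kl_divergence(p, p))  # 0 when the distributions match
```

The divergence is zero exactly when the two distributions coincide, which is the convergence target CD drives the RBM toward.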
For a better understanding of the CD approach, the mathematical derivation is included below.
Here, v and h denote the visible and hidden layers of the RBM, respectively. The energy of a joint configuration (v, h) of the visible and hidden layers is determined below:

E(v, h) = - Σ_i a_i v_i - Σ_j b_j h_j - Σ_{i,j} v_i h_j w_ij
Here, v_i and h_j indicate the binary states of visible node i and hidden node j, a_i and b_j represent the bias values, and w_ij indicates the weight between v_i and h_j.
Next, the joint probability over v and h is calculated by:

p(v, h) = (1 / Z) e^{-E(v, h)},   where Z = Σ_{v,h} e^{-E(v, h)}
Let p(v) indicate the probability that the RBM assigns to a visible vector v, obtained by summing over every possible hidden vector:

p(v) = (1 / Z) Σ_h e^{-E(v, h)}
The equation to update the weights is estimated by taking the derivative of log p(v) with respect to the weight w_ij:

Δw_ij = ε (⟨v_i h_j⟩_data - ⟨v_i h_j⟩_model)
Here, ε indicates the learning rate and ⟨·⟩ represents the expectation under the model or data distribution.
The conditional probabilities of h given v and of v given h are given below:

p(h_j = 1 | v) = σ(b_j + Σ_i v_i w_ij)

p(v_i = 1 | h) = σ(a_i + Σ_j h_j w_ij)
Here, σ(·) indicates the sigmoid function.
Using Eqs. (12), (14), and (15), it is easy to obtain an unbiased sample of ⟨v_i h_j⟩_data, since there are no direct interconnections among the visible nodes or among the hidden nodes of an RBM. In contrast, obtaining an unbiased sample of ⟨v_i h_j⟩_model is very complex: Gibbs sampling must update each hidden and visible node separately, resulting in slow convergence. To resolve this problem, the CD approach generates a reconstruction of the visible vector by setting each visible node to one with probability p(v_i = 1 | h) once the binary states have been calculated for the hidden nodes. The weight update is shown below:

Δw_ij = ε (⟨v_i h_j⟩_data - ⟨v_i h_j⟩_recon)
Here, recon represents the reconstruction stage.
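The CD-1 update described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the array shapes (four visible, three hidden nodes, as in the text), the learning rate, the seed, and the single-pattern training loop are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.1):
    """One CD-1 step: hidden probabilities from the data, a reconstruction of
    the visible vector, then Delta w_ij = lr * (<v h>_data - <v h>_recon)."""
    ph0 = sigmoid(b + v0 @ W)                          # p(h_j = 1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample binary hidden states
    pv1 = sigmoid(a + h0 @ W.T)                        # reconstruction p(v_i = 1 | h0)
    ph1 = sigmoid(b + pv1 @ W)                         # hidden probs from reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1)) # weight update
    a += lr * (v0 - pv1)                               # visible bias update
    b += lr * (ph0 - ph1)                              # hidden bias update
    return W, a, b

v0 = np.array([1.0, 0.0, 1.0, 1.0])        # 4 visible nodes
W = 0.01 * rng.standard_normal((4, 3))     # 3 hidden nodes
a, b = np.zeros(4), np.zeros(3)
for _ in range(100):
    W, a, b = cd1_update(v0, W, a, b)
print(sigmoid(a + sigmoid(b + v0 @ W) @ W.T))  # reconstruction probabilities
```

After repeated updates the reconstruction probabilities drift toward the training pattern, which is exactly the "recreate the data" behavior described above.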
Finally, the Adam optimizer is utilized for hyperparameter optimization of the DBN model. Adam is applied to estimate an adaptive learning rate for every variable employed in training the network. It is an efficient and elegant technique for first-order gradient-based stochastic optimization with modest storage requirements. The technique suits ML problems with high-dimensional variable spaces and massive data, as it evaluates the learning rate for different parameters from estimates of the first- and second-order moments of the gradients. In addition, the Adam optimizer is commonly viewed as a combination of the momentum and gradient descent (GD) methods. Consequently, the first moment is obtained by:

m_t = β_1 m_{t-1} + (1 - β_1) g_t
The second moment is expressed by:

v_t = β_2 v_{t-1} + (1 - β_2) g_t^2
Here, g_t denotes the gradient at step t, and β_1, β_2 ∈ [0, 1) are the exponential decay rates of the moment estimates.
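The two moment updates, together with Adam's standard bias correction and parameter step, can be sketched as below; the quadratic toy objective, learning rate, and step count are assumptions for the example:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, then parameter step."""
    m = beta1 * m + (1 - beta1) * grad          # first moment m_t
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment v_t
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy run: minimize f(theta) = theta^2, whose gradient is 2 * theta
theta, m, v = np.array([3.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # approaches the minimizer at 0
```

The per-parameter scaling by sqrt(v_hat) is what gives each weight its own effective learning rate, the property the text highlights.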
This section inspects the performance validation of the ALO-ODBN model using a benchmark lung cancer dataset. It comprises 32 samples with 56 features and 3 class labels.
Fig. 4 demonstrates the confusion matrices produced by the ALO-ODBN model on distinct runs. On run-1, the ALO-ODBN model correctly identified 9, 13, and 9 samples of the three class labels, respectively. On run-2, it correctly identified 9, 12, and 10; on run-3, 9, 12, and 9; on run-4, 9, 11, and 9; and on run-5, 9, 13, and 8 samples, respectively.
Tab. 1 and Fig. 5 demonstrate brief classification results of the ALO-ODBN model over distinct runs. The results indicate that the ALO-ODBN model accomplishes maximum classification performance. For instance, with run-1, the ALO-ODBN model provides an average accuracy, precision, recall, and F-score of 97.92%, 97.62%, 96.67%, and 97.01%, respectively. With run-2, it offers an average accuracy, precision, recall, and F-score of 97.92%, 96.67%, 97.44%, and 96.91%, respectively. Meanwhile, with run-4, the corresponding averages are 93.75%, 91.67%, 91.54%, and 90.71%, and with run-5 they are 95.83%, 93.94%, 93.33%, and 92.96%, respectively.
The training accuracy (TA) and validation accuracy (VA) attained by the ALO-ODBN approach on lung cancer classification are demonstrated in Fig. 6. The experimental outcomes imply that the ALO-ODBN model gains maximum values of TA and VA. In particular, the VA appears to be higher than the TA.
The training loss (TL) and validation loss (VL) achieved by the ALO-ODBN model on lung cancer classification are shown in Fig. 7. The experimental outcomes infer that the ALO-ODBN technique accomplishes the least values of TL and VL. In particular, the VL appears to be lower than the TL.
A brief precision-recall examination of the ALO-ODBN model on the test dataset is represented in Fig. 8. By observing the figure, it can be stated that the ALO-ODBN approach accomplishes maximal precision-recall performance on the test dataset.
A brief ROC investigation of the ALO-ODBN method on the test dataset is depicted in Fig. 9. The results expose that the ALO-ODBN model exhibits its ability to categorize three different classes, namely class 1, class 2, and class 3, on the test dataset.
To ensure the betterment of the ALO-ODBN model, a comparison study is made with existing models in Tab. 2. A detailed accuracy and precision investigation of the ALO-ODBN model against existing models is performed in Fig. 9. The results indicate that the SVM model shows the least performance with minimal values of accuracy and precision. Likewise, the Radial Basis Function (RBF) Neural Network accomplishes a slightly improved outcome. In addition, the DNN and DT (C4.5) models show reasonable values of accuracy and precision. However, the ALO-ODBN model exhibits a superior accuracy and precision of 97.92% and 97.62%, respectively.
A brief recall and F-score investigation of the ALO-ODBN method against existing techniques is implemented in Fig. 10. The outcome indicates that the SVM approach shows the least performance with minimal values of recall and F-score. Likewise, the RBF Neural Network accomplishes a somewhat enhanced outcome. Besides, the DNN and DT (C4.5) approaches reveal reasonable values of recall and F-score, as shown in Fig. 11. At last, the ALO-ODBN approach exhibits a superior recall and F-score of 96.67% and 97.01%, respectively.
From the abovementioned tables and figures, it is apparent that the ALO-ODBN model outperforms the other methods in terms of the different measures. The detection of the patient's overall survival rate is improved when compared with the other existing methods.
In this study, a new ALO-ODBN model has been developed for lung cancer detection, classification, and survival rate prediction. The proposed ALO-ODBN model aims to identify and classify the presence of lung cancer. Initially, the ALO-ODBN model applies a min-max data normalization approach to preprocess the input data. Besides, the ALO algorithm is executed to choose an optimal subset of features. In addition, the DBN model receives the chosen features and performs lung cancer classification. Finally, the Adam optimizer is utilized for hyperparameter optimization of the DBN model. In order to report the enhanced performance of the ALO-ODBN model, a wide-ranging experimental analysis is performed, and the results report the supremacy of the ALO-ODBN model.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.