The Internet of Medical Things (IoMT) emerges from the vision of the Wireless Body Sensor Network (WBSN) to improve health monitoring systems, and it has an enormous impact on healthcare by recognizing risk/severity factors (premature diagnosis, treatment, and supervision of chronic diseases such as cancer) via wearable/electronic health sensors such as the wireless endoscopic capsule. In particular, AI-assisted endoscopy plays a significant role in the detection of gastric cancer. Convolutional Neural Networks (CNNs) have been widely used to diagnose gastric cancer based on various feature extraction models; consequently, identification and categorization performance is limited in terms of the cancerous stages and grades associated with each type of gastric cancer. This paper proposes an optimized AI-based approach to diagnose and assess the risk factor of gastric cancer according to its type, stage, and grade in endoscopic images for smart healthcare applications. The proposed method comprises five phases: image pre-processing, Four-Dimensional (4D) image conversion, image segmentation, K-Nearest Neighbour (K-NN) classification, and multi-grading and staging of image intensities. The performance of the proposed method is evaluated on two different datasets consisting of color and black and white endoscopic images. The simulation results verified that the proposed approach is capable of perceiving gastric cancer with 88.09% sensitivity, 95.77% specificity, and 96.55% overall accuracy.
The Internet of Medical Things (IoMT), also known as the Wearable Internet of Things (WIoT), is a promising technology that modernizes existing healthcare systems by fusing the Internet of Things (IoT), Artificial Intelligence (AI), and the Wireless Body Sensor Network (WBSN) into a smart healthcare system. IoMT is gaining tremendous research interest in healthcare applications for providing remote global healthcare anytime and anywhere, with special attention to emergencies, in order to avoid unfortunate incidents. The concept of IoT was coined by Kevin Ashton in 1999 as “uniquely identifiable interoperable connected things/objects with Radio Frequency Identification (RFID) technology” [
A wireless endoscopic capsule acts as a biosensor node made up of a micro-imaging camera with a wireless circuit and localization software, while a data recorder attached to the human body acts as a sink node. As the capsule passes through the gastrointestinal tract, roughly 50,000 frames are captured over eight hours and transmitted to the base station via the recorder. This produces a huge amount of patient data, but processing it to make intelligent, feasible decisions is time-consuming. Scientists and researchers have long tried to imitate the human brain’s capabilities via AI techniques and models, creating autonomous, intelligent applications that can make decisions without human intervention. Existing AI techniques provide instinctive resource provisioning and extract hidden knowledge from raw data by determining regular patterns, enabling better predictions and critical decisions in medical diagnosis. In addition, IoMT has turned the idea of a smarter world into reality, together with a massive range of services. It supports caregivers and patients in improving quality of life, understanding health risks, and achieving early diagnosis, treatment, and management of chronic diseases (for instance, cardiovascular issues, diabetes, and cancers of the brain, stomach, lungs, or skin) via wearable health sensors (pacemakers, endoscopic capsules, etc.) with a wireless communication link to a hub (recorder) placed on the waist or near the human body. The quality of human life can be improved if various disorders and diseases are predicted at a preliminary stage, before they become severe and unsafe, by recognizing the vital signs. The scientific contributions of this paper can be summarized as follows.
- We perform a critical analysis of various types of cancer, with a focus on gastric cancer, along with the most relevant literature.
- We propose an optimized AI-based approach for the smart healthcare application of IoMT to diagnose and discriminate the various types, stages, and grades of gastric cancer based on endoscopic images.
- We calculate and compare the efficacy of the proposed approach in terms of specificity, sensitivity, and overall accuracy against existing state-of-the-art methods and demonstrate that our proposed method achieves superior performance.
The rest of the paper is organized as follows. Section 2 investigates and summarizes some of the existing gastric cancer detection approaches along with the basic concept of cancer, and highlights their limitations. Section 3 gives an in-depth overview of the proposed optimized AI-based approach for smart healthcare applications. Section 4 presents the experimental results. Finally, Section 5 concludes the paper and outlines future directions.
The human body is composed of a number of cells that grow and divide to form new cells. When cells become damaged or grow old, they die, and new cells replace them. When cancer arises, this normal cell cycle is disturbed: the cancer cells grow unchecked, in trillions, into the surrounding tissues of the body. Gastric cancers are dangerous, slow-growing tumors usually found in the gastrointestinal tract, which runs from the oral cavity to the rectum, where food is expelled. The main function of this tract is to break food down into nutrients through various processes; in the case of a gastrointestinal disorder or disease, this function is not achieved. Consequently, it is very important to identify gastric cancer at an early stage.
In this context, medical imaging (segmentation) plays a vital role and has a great impact on medicine and on the identification and treatment of multidisciplinary diseases. The examination of cancer images depends entirely on professional doctors with high expertise and clinical experience. However, the growing amount of medical imaging data has brought additional challenges for radiologists. Nowadays, deep and machine learning techniques are making remarkable progress in the gastric cancer domain thanks to their learning capacity and computational power. These methods provide an effective solution for automatic classification and segmentation to accomplish high-precision intelligent identification of cancers [
For example, a modified CNN classification model has been proposed for Gastric Intestinal Metaplasia (GIM) [
This section describes the overall procedure of the proposed optimized AI-based approach, which centers on the identification and risk assessment of gastric cancer. The proposed method comprises five phases: image preprocessing, Four-Dimensional (4D) image conversion, image segmentation, K-Nearest Neighbour (K-NN) classification, and multi-grading and staging of image intensities. Two different datasets are considered: (i) Dataset I, black and white, and (ii) Dataset II, color captured/unprocessed endoscopic images, taken from the American Cancer Society (ACS) and Kvasir respectively [
This preliminary phase of the proposed approach aims to enhance image quality by eliminating redundant information and noise from the captured 3D (430 × 476) gastrointestinal endoscopic images of Datasets I and II via a transform-domain denoising method (a Gaussian filter). Furthermore, additional background containing unnecessary information is removed via the GrabCut technique. Afterward, the color endoscopic images (Dataset II) are converted to grayscale before being passed to the next stage for further processing.
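As a rough illustration of this phase, the sketch below applies a separable Gaussian filter and a luminance-weighted grayscale conversion in plain NumPy. The GrabCut background-removal step is omitted (it normally relies on an external library such as OpenCV), and the function names, sigma value, and luminance weights are illustrative assumptions, not the exact pipeline used in the paper.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Denoise a 2-D frame with a separable Gaussian filter:
    one horizontal pass followed by one vertical pass."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode='edge')          # replicate borders
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def to_grayscale(rgb):
    """Luminance-weighted conversion of an H x W x 3 color frame."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```

The separable form gives the same result as a full 2-D Gaussian convolution at a fraction of the cost, which matters when thousands of capsule frames must be preprocessed.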
In this phase, the preprocessed endoscopic images are converted into 4D (440 × 510) spatial images to obtain a clearer view and achieve the precision necessary to detect minute, early-stage tumors in the stomach. In this context, a Framework Standard Library (FSL) tool is used to increase the resolution and pixel dimensions (pixel size 360 mm × 360 mm, with 1 mm spacing between slices and a slice width of 6 mm), which targets the flow of pixels.
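The paper performs this resolution increase with an FSL tool; as a loose stand-in, the sketch below upsamples a 2-D frame (e.g., from 430 × 476 toward 440 × 510) with plain bilinear interpolation. The function name and interpolation choice are assumptions for illustration only.

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Upsample a 2-D frame to (out_h, out_w) with bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)          # fractional source rows
    xs = np.linspace(0, w - 1, out_w)          # fractional source cols
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # blend the four neighbouring pixels of each output location
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```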
In this phase, the 4D endoscopic images are partitioned into meaningful sections or regions based on criteria such as color, intensity, texture, and their combinations to identify tumors. A non-flexible transformation approach, the Gaussian mixture model, is used to increase the signal-to-noise ratio (the square root of the number of photons in the brightest part of the image) of the concurrent 4D spatial endoscopic images and obtain the required deformation matrices. Moreover, edge-based and location-based detection is performed on the endoscopic images to consider the pixels and their boundaries in terms of the position and region of tumor cells in the stomach. Each pixel in a region is measured against the boundaries of the tumor cells. Consequently, local segmentation is performed on the specific cancerous part to improve the visual quality of the endoscopic images. A threshold value is defined to measure pixel intensity: using the histogram, pixels below the threshold are set to black (bit value zero) and pixels above it to white (bit value one).
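The paper does not state how the intensity threshold is chosen; the sketch below uses Otsu's classic histogram-based method as a stand-in and then binarizes the frame exactly as described above (below-threshold pixels to 0, the rest to 1). The function names are illustrative.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold that maximizes between-class variance
    of the intensity histogram (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    hist = hist.astype(float)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                  # pixels at or below each bin
    w1 = w0[-1] - w0                      # pixels above each bin
    m0 = np.cumsum(hist * mids)
    mu0 = m0 / np.where(w0 == 0, 1, w0)   # mean of the lower class
    mu1 = (m0[-1] - m0) / np.where(w1 == 0, 1, w1)  # mean of the upper class
    between = w0 * w1 * (mu0 - mu1) ** 2
    return mids[np.argmax(between)]

def binarize(img, threshold=None):
    """Black (0) below the threshold, white (1) at or above it."""
    if threshold is None:
        threshold = otsu_threshold(img)
    return (img >= threshold).astype(np.uint8)
```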
In this phase, a supervised machine learning algorithm, K-NN, is used to solve the classification problem. It stores all available samples and classifies new samples based on a similarity measure (e.g., a distance function). The condition of an endoscopic image is classified by a majority vote over the computed distances to the tumor cells. The feature values Xi and Yi, together with a query point q, are used to categorize the cancer stages and grades: a Histogram of Oriented Gradients (HOG) exposes the detailed distance and location of the cancer in the stomach as an object from one point to another, and the maximum intensity (frequency of occurrence) indicates the solid form of the cancer from which its grade and stage are measured. A sample is assigned the most common class among its K nearest neighbors; when K equals one, it simply takes the class of its single nearest neighbor. For continuous variables, the distance between tumor cells is calculated by the Euclidean distance: d(X, Y) = √(Σᵢ (Xᵢ − Yᵢ)²).
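The classification step above can be sketched as a plain K-NN majority vote under the Euclidean distance. The feature vectors (e.g., HOG descriptors) are assumed to be precomputed, and the function name is illustrative.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, q, k=3):
    """Classify query point q by majority vote among its k nearest
    training samples under the Euclidean distance."""
    d = np.sqrt(((train_X - q) ** 2).sum(axis=1))   # distance to every sample
    nearest = np.argsort(d)[:k]                     # indices of the k closest
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With k=1 this degenerates to nearest-neighbor assignment, matching the K = 1 special case described above.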
The simplest way to determine the optimal value of K is to first analyze the data. A higher value of K tends to be more robust because it suppresses residual noise. Furthermore, cross-validation is performed on an unbiased dataset to select a suitable value of K.
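Selecting K by cross-validation, as described above, can be sketched as follows; the candidate values, fold count, and helper name are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def cv_best_k(X, y, candidates=(1, 3, 5, 7), folds=5, seed=0):
    """Return the K whose k-fold cross-validated accuracy is highest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    splits = np.array_split(idx, folds)
    scores = {}
    for k in candidates:
        correct = 0
        for f in range(folds):
            test = splits[f]
            train = np.concatenate([splits[g] for g in range(folds) if g != f])
            for i in test:
                # K-NN vote using only the training portion of this fold
                d = np.sqrt(((X[train] - X[i]) ** 2).sum(axis=1))
                nearest = train[np.argsort(d)[:k]]
                votes = Counter(y[j] for j in nearest)
                if votes.most_common(1)[0][0] == y[i]:
                    correct += 1
        scores[k] = correct / len(X)
    return max(scores, key=scores.get)
```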
Cancer classification reflects the irregular shape of cancer cells and tissues. In this phase, gastric cancer is categorized according to its type, grade, and stage. The various types, grades, and stages of cancer, along with their ranges, are defined in
Type | Grade | Stage | Tumor size/Range accuracy (%) |
---|---|---|---|
Gastric-glioma | Low-grade-I | 0 | 41–100 |
Gastric-glioma | Low-grade-II | 1 | |
Gastric-glioma | Low-grade-II | 2 | 80–300 |
Gastric-meningioma | High-grade-III | 3 | 300–500 |
Gastric-glioblastoma | High-grade-IV | 4 | |
With the proposed method described in Section 3, various experiments are conducted using MATLAB to identify gastric cancer and measure the classification performance in terms of grading and staging. For initial testing, 150 endoscopic images were taken from each dataset (I and II). Of the 300 images, 250 include stage 0–4 anomalies while 50 contain inflammatory anomalies. The presented images are summarized as follows:

- 4D_im1 and 4D_im7: first stage, low-grade-I Glioma, size 74 cm (B/W and C), maximum intensity B/W: 8945, C: 5520.
- 4D_im2 and 4D_im8: second stage, low-grade-II Glioma, sizes 254 cm (B/W) and 95 cm (C), maximum intensity B/W: 64502, C: 8945.
- 4D_im3: third stage, high-grade-III Meningioma, size 360 cm (B/W), maximum intensity B/W: 129672.
- 4D_im4: fourth stage, high-grade-IV Glioblastoma, size 380 cm (B/W), maximum intensity B/W: 144348.
- 4D_im5 and 4D_im6: second stage, low-grade-II Glioma, sizes 85 and 144 cm (B/W), maximum intensity B/W: 7288 and 20855.
- 4D_im9: second stage, high-grade-II Glioma, size 95 cm (C), maximum intensity C: 12672.
- 4D_im10: second stage, high-grade-II Glioma, size 210 cm (C), maximum intensity C: 43914.
- 4D_im11 and 4D_im12: fourth stage, high-grade-IV Glioblastoma, sizes 383 and 469 cm (C), maximum intensity C: 146455 and 219658.
The prevalence of gastric cancer is evaluated by sensitivity, specificity, the Matthews Correlation Coefficient (MCC), and overall accuracy. Sensitivity and specificity are statistical measures of the performance of a binary classification test that are widely used in medicine and are defined as follows:
Sensitivity/recall (Se) is the true-positive rate, specificity (Sp) is the true-negative rate, accuracy (Acc) is the proportion of correct predictions, and the F-score measures classification performance in terms of recall and precision. The metrics are defined by

Se = TP / (TP + FN), Sp = TN / (TN + FP), Acc = (TP + TN) / (TP + TN + FP + FN), F1 = 2·TP / (2·TP + FP + FN)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives respectively.
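These confusion-matrix metrics can be computed directly from the four counts; the function name is illustrative.

```python
def binary_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    se = tp / (tp + fn)                    # sensitivity / recall
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    f1 = 2 * tp / (2 * tp + fp + fn)       # F-score
    return se, sp, acc, f1
```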
Moreover, the proposed method is compared with existing methods in terms of overall accuracy. These existing methods are based on feature extraction of gastric cancer for other purposes. In [
Individual results are based on the presented images; overall results are aggregated per stage.

Cancer type | Grade | Stage | Tumor size (C) | Tumor size (B/W) | Sensitivity (%) | Specificity (%) | Accuracy (%) | Overall sensitivity (%) | Overall specificity (%) | Overall accuracy (%) |
---|---|---|---|---|---|---|---|---|---|---|
Gastric-glioma | Low-grade (I) | 1 | 74 | 74 | 88.54 | 95.45 | 96.41 | 88.54 | 95.45 | 96.41 |
Gastric-glioma | Low-grade (II) | 2 | 95 | 254 | 85.10 | 94.15 | 96.55 | 87.55 | 95.83 | 96.45 |
 | | | - | 85 | 86.22 | 95.21 | 96.32 | | | |
 | | | 95 | - | 89.15 | 96.12 | 96.57 | | | |
 | | | - | 144 | 88.60 | 96.29 | 96.32 | | | |
 | | | 210 | - | 88.72 | 97.38 | 96.52 | | | |
Gastric-meningioma | High-grade (III) | 3 | - | 360 | 88.08 | 95.16 | 96.62 | 88.08 | 95.16 | 96.62 |
Gastric-glioblastoma multiforme | High-grade (IV) | 4 | - | 380 | 88.19 | 95.20 | 96.72 | 88.84 | 95.99 | 96.75 |
 | | | 383 | - | 89.15 | 96.29 | 96.73 | | | |
 | | | 469 | - | 89.18 | 96.50 | 96.80 | | | |
In this paper, we have proposed an optimized AI-based approach that identifies and assesses the severity of gastric cancer, enhancing the overall identification and classification results on endoscopic images. For this purpose, we examined two datasets that categorize gastric cancer in color and black and white endoscopic images based on type, grade, and stage. The experimental results demonstrate the effectiveness of the proposed method in making accurate decisions regarding the four grades and stages of gastric cancer. The simulation results verified that the proposed method achieved an aggregated sensitivity of 88.09%, specificity of 95.77%, and accuracy of 96.55%. In the future, we aim to extend the current work to a detailed classification of each grade by investigating light field tools with the K-NN algorithm to achieve a balance between efficiency and accuracy.
The authors would like to thank Universiti Teknologi Malaysia for providing the environment to conduct this research work. In addition, one of the authors would like to thank Sule Lamido University, Kafin Hausa, Nigeria, for its generous support of his postgraduate studies.
The authors extend their appreciation to the
The authors declare that they have no conflicts of interest to report regarding the present study.