Cloud environments today face numerous issues, such as synchronizing information before a switch-over during data migration, and the demand for centralized Internet of Things (IoT)-based systems remains restricted to some extent. Owing to low scalability and security concerns, the cloud is an unattractive option. Since healthcare networks demand computation over large volumes of data, device latency within health networks is a challenging issue. Compared with cloud domains, the newer paradigm of fog computing offers a fresh alternative by bringing resources closer to users, providing low-latency and energy-efficient data processing. Previous fog computing frameworks have various flaws, such as overvaluing response time or ignoring result accuracy, while handling both at once strains the network. In this work, HealthFog is integrated with an Optimized Cascaded Convolutional Neural Network (CCNN) framework for diagnosing heart disease. Initially, the data is collected and pre-processed using Linear Discriminant Analysis. Features are then extracted and optimized using Galactic Swarm Optimization. The optimized features are fed into the HealthFog framework for diagnosing heart disease patients. It applies ensemble-based deep learning on edge computing devices, which automatically monitors real-life health networks such as heart disease analysis. Finally, classifiers such as bagging, boosting, XGBoost, Multi-Layer Perceptron (MLP), and Partial Decision Tree (PART) are used to classify the data, and a majority voting classifier predicts the result. This work uses the FogBus architecture and evaluates power usage, network bandwidth, latency, execution time, and accuracy.
Fog and cloud general-purpose computing have developed into a foundation of the market, relying on the Internet to supply customers with on-demand products. Each of these fields has attracted a lot of interest from industry and academia. Cloud computing, however, is not a good alternative for programs that demand real-time responses because of its substantial latency. Edge computing, fog computing, the Internet of Things (IoT), and Big Data are examples of technological advances that have gained traction. Owing to their durability and capacity, the relevant performance parameters depend on the intended applications [
Cloud and fog computing concepts have received much attention and have become a foundation for the market, which relies on Internet services to supply customers with on-demand operational resources. Both industry and academia have adopted these subjects as vital components. Because of its considerable time delay, cloud computing is not ideal for implementations that require fast answers [
These two fields have received much attention from professionals and academia. However, because of the significant delay in response time, cloud technology is not a good choice for programs that demand constant feedback. Owing to their robustness and capacity to offer responsive service, and depending on the intended applications, developments such as big data management with the Internet of Things (IoT), fog computing, and edge computing have become crucial [
Fog computing seamlessly accommodates delay-sensitive and consistency-critical applications thanks to high-volume data storage, computation, and efficient communication practices. These are supplied by technological innovations such as edge devices, which enhance mobility, security, safety, low latency, and network bandwidth. [
Cloud computing actively supports the development of application guidelines, for instance via IoT systems, fog nodes, cloud technologies, and big data management and architecture [
The main contributions of the proposed work are as follows: integrating HealthFog with an Optimized Cascaded Convolutional Neural Network provides efficient smart diagnosis of heart disease; the proposed method is deployed on the FogBus structure, which includes IoT-edge computing devices for real-time examination; the optimization increases accuracy while lowering error rates; and ensemble-based deep learning models are used to solve a binary classification problem.
The rest of this article is organized as follows: Section 2 discusses related work on HealthFog, ensemble models, fog computing, and deep learning models. Section 3 presents the general working methodology of the proposed work. Section 4 evaluates the implementation and results of the proposed method. Section 5 concludes the work and discusses the result evaluation.
The fog computing structure is a new technology for efficiently accessing medical information from a multitude of IoT devices. Because edge devices are nearer to IoT devices than cloud computing environments, fog computing can process cardiac patients' data at edge devices or fog nodes with substantial processing capacity, high access speed, fast reaction time, and low latency. Fog computing is thus a key concept for the proper organization of healthcare data in the medical field, which can be acquired via various IoT-enabled devices. Since edge-computing-enabled networks are significantly closer to IoT-enabled devices than cloud-based data centers, fog-compatible devices or fog nodes can handle heart patients' data, drastically lowering delay, latency, and response time.
For collecting the medical information from various cardiac patients, the author [
In [
The author [
To study the possibility of implementing a Convolutional Neural Network (CNN)-based classifier model as an instance of deep learning approaches, the researchers presented a Hierarchical Edge-based Deep Learning (HEDL) healthcare IoT platform.
The author [
FogBus [
Aneka [
The programming models include Bag of Tasks, Threads, MapReduce, and Parameter Sweep. For job distribution among cloud VMs in HealthFog and FETCH, the Bag of Tasks paradigm was used. FogBus is used to provide fog services, and Aneka to provide the public cloud, in HealthFog-FE [
The HealthFog-CCNN framework is IoT- and fog-enabled cloud-based software for health data. It successfully manages the data of heart patients and analyzes their medical conditions to assess the extent of heart disease. Using software applications, HealthFog-CCNN combines a variety of physical devices, allowing a systematic, seamless end-to-end Edge-Fog-Cloud connection for quick and precise transmission of findings.
Initially, the data is collected and pre-processed using Linear Discriminant Analysis. The features are then extracted and optimized using Galactic Swarm Optimization (GSO). The optimized features are fed into the HealthFog framework for diagnosing heart disease patients. It applies ensemble-based deep learning on edge computing devices, which automatically monitors real-life health networks such as heart disease analysis. Finally, classifiers such as bagging, boosting, XGBoost, MLP, and PART are used to classify the data, and a majority voting classifier predicts the result.
The proposed HealthFog-CCNN contains the following hardware components: body area sensors, an application gateway, and the FogBus framework.
This category comprises three kinds of sensors: medical, activity, and environmental sensors. They capture the information of heart patients and send it to the appropriate gateway devices.
The suggested system uses three distinct kinds of gadgets as the application gateway, including smartphones, computers, and tablets, which act as fog-enabled tools to gather data from the various sensors and pass it to the broker node for further analysis.
The FogBus framework consists of three parts: worker nodes, a broker node, and a cloud-based data center.
This section receives the work requirements and input data from the gateway devices. The request input module receives job demands from the gateway devices and buffers them briefly until the data is forwarded. The arbitration module (a component of the task scheduler in the broker node) collects the workload information from the connected networks and decides, in stages, to which worker node or sub-node each task should be transmitted.
This section focuses on completing the tasks assigned by the resource manager of the broker node. Worker nodes may be gadgets or single-board computers (SBCs) such as the Raspberry Pi. Worker nodes in HealthFog-CCNN can use new deep learning approaches to evaluate data and produce and measure outcomes. Different elements for data processing, information extraction and mining, big data analytics, and storage can be added to a worker node. The worker nodes receive input information directly from the gateway equipment, produce outputs, and exchange them with the gateway gadgets. The broker node in the HealthFog model can also act as a worker component.
Whenever the fog computing framework becomes congested, service availability suffers because of the delay; when the bulk of the data is substantially bigger than typical, HealthFog-CCNN turns to the cloud-based data center for help.
The HealthFog-CCNN method consists of the following process: after collecting the data from the patients, the data is pre-processed, and Linear Discriminant Analysis is used for filtering.
Pre-processing is the first stage once the information is entered, and involves employing data analytics tools to filter it. To retrieve the key elements of the extracted features that impact the patient's health condition, the filtered data is reduced to a shorter length using Linear Discriminant Analysis with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm, and encrypted using the Singular Value Decomposition (SVD) method. The system then makes a judgment from the extracted information, prescribing medications and appropriate check-ups based on training examples from health professionals and physicians, and stores it in a database for re-training as necessary.
Linear Discriminant Analysis (LDA) is a reliable classification technique that may also be used for data presentation and dimensionality reduction. It is a supervised machine learning technique that calculates a boundary that improves the separation among the categories, unlike Principal Component Analysis (PCA), which aims to maximize variance.
By maximizing the interval between the projected class means and limiting the projected variance, it attempts to separate the classes. These objectives are integrated into a single criterion function, which for binary classification can be stated as follows:
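Since the equation itself is not reproduced in this excerpt, a standard form of this criterion (reconstructed here under conventional Fisher LDA notation) is:

```latex
J(\mathbf{w}) \;=\; \frac{\mathbf{w}^{\top} S_B \,\mathbf{w}}{\mathbf{w}^{\top} S_W \,\mathbf{w}},
\qquad
S_B = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^{\top},
```

where \(S_B\) is the between-class scatter built from the two class means \(\boldsymbol{\mu}_1, \boldsymbol{\mu}_2\), and \(S_W\) is the within-class scatter; maximizing \(J(\mathbf{w})\) separates the projected means while limiting the projected variance.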
Here
In LDA, all K categories are assumed to have the same covariance. Using this assumption, we may derive the following discriminant function for the kth class:
Classes are separated as much as feasible from one another, and samples within a class are kept as close together as practicable. The transformed dimensions are ranked by their discriminative capability. The maximum number of components is one less than the number of classes. As a result, because this was a binary classification study, we used only the first linear discriminant.
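The single-discriminant property for a binary problem can be illustrated with a minimal sketch using scikit-learn's `LinearDiscriminantAnalysis` on synthetic data (the 13-feature shape here merely mirrors the Cleveland attribute count and is not the paper's actual data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two synthetic classes with 13 features each (hypothetical stand-in data)
X0 = rng.normal(0.0, 1.0, size=(100, 13))
X1 = rng.normal(1.0, 1.0, size=(100, 13))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# For a binary problem LDA yields at most one discriminant (n_classes - 1)
lda = LinearDiscriminantAnalysis(n_components=1)
X_reduced = lda.fit_transform(X, y)
print(X_reduced.shape)  # (200, 1): every sample projected onto the first discriminant
```

The fitted projection can then feed the downstream feature-extraction stage.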
For feature extraction, the mean, minimum, maximum, standard deviation, kurtosis, and skewness are used. This helps to reduce overfitting and the amount of repetitive data.
"Adding the input data values and dividing by the number of data values" yields the mean of a dataset.
The set of values is written as Yd, while the total number of values is written as d. The lowest value in the data is referred to as the minimum, and the biggest value in the dataset is referred to as the maximum, correspondingly.
This is "the square root of the average of the squared deviations of every data point from the mean." The standard deviation formula is found in
Kurtosis is a statistical measure of how heavy-tailed or light-tailed the data are compared with a normal distribution, as defined in
This "corresponds to a distortion or asymmetry in a data set that deviates from the symmetric bell-shaped curve of a normal distribution," as defined in
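The six statistical features above can be computed directly with NumPy and SciPy; this is a minimal sketch on a hypothetical toy sample, not the paper's dataset (note SciPy's `kurtosis` uses the Fisher definition, which is 0 for a normal curve):

```python
import numpy as np
from scipy.stats import kurtosis, skew

# Hypothetical toy sample standing in for one patient attribute
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

features = {
    "mean": float(np.mean(x)),       # sum of values / number of values = 5.0
    "min": float(np.min(x)),         # smallest value
    "max": float(np.max(x)),         # largest value
    "std": float(np.std(x)),         # population standard deviation = 2.0
    "kurtosis": float(kurtosis(x)),  # heavy- vs light-tailedness vs a normal curve
    "skewness": float(skew(x)),      # asymmetry of the distribution
}
print(features)
```

Each record would yield one such feature vector before optimization by GSO.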
Workload management and the arbitration element are the two components of this system. The workload manager keeps track of job requests and task queues for data processing, and manages the large amounts of data that must be analyzed. The arbitration component allocates the supplied fog or cloud services for executing the tasks prioritized and managed by the workload manager. The arbitration module is located in the broker node and determines which node, the broker, a fog worker node, or the cloud data center, must be given the information to produce the outcomes.
The main goal is to split the tasks across the various resources in order to handle the load and provide optimal performance. Users of HealthFog-CCNN can customize the arbitration policies based on their own load-balancing and application needs.
This system utilizes a majority voting approach to determine the output category, which indicates whether the patient has cardiac disease, and it is combined with a deep learning model to boost the predictions from multiple models. The majority voting base accumulates the learners' outcomes much as the weighted average does. However, rather than calculating the mean of the probability outputs, the majority voting base collects the learners' votes and predicts the final label as the label with the most votes.
Since the number of majority votes outweighs individual effects, the majority vote is less skewed toward a single learner's outcome than the weighted average. In the ensemble model, meanwhile, event domination is caused by a majority of identical weak learners, or by the base learners favoring a specific event. The sum of weak learners (Wl) is obtained as in
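The majority-vote rule described above reduces to counting labels; a minimal sketch (the vote values here are hypothetical, one per base learner):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label with the most votes among the base learners.

    predictions: one label per weak learner (e.g. 0 = no heart disease, 1 = presence).
    """
    return Counter(predictions).most_common(1)[0][0]

# Five hypothetical base-learner votes (bagging, PART, MLP, XGBoost, boosting) on one patient
votes = [1, 0, 1, 1, 0]
print(majority_vote(votes))  # 1: three of five learners voted "presence"
```

Unlike a weighted average of probabilities, only the discrete labels enter the decision.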
This component propagates data and delivers findings from the other worker nodes in the FogBus hub back to the assigned tasks.
The HealthFog-CCNN elements described above exchange a considerable quantity of data, information, and control signals, which requires steady network connectivity. Furthermore, the transmission must be consistent and fault-tolerant. With all of this in mind, the elements are organized in the topology depicted in
The HealthFog-CCNN system design is based on a master-slave structure, with each broker also able to act as a worker to assess and manage the workload. In the non-cloud scenario, the gateway delivers the input data to the broker/worker nodes for execution, after which the pre-processing, prediction, and computed results are sent back to the gateway devices. Because only the gateway gadgets can connect to the virtual private network (VPN), in the case of cloud processing the data is sent to a broker node, which then transfers the collected data to the cloud data center (CDC). This also ensures that hostile components and programmers cannot access the IoT sensors and gateway devices, because these cannot communicate with the web-based system; they communicate with other nodes only over local area networks (LANs). The duration of an operation depends on both the broker nodes and the CDC: the cloud incurs a higher connection expense and more delay owing to network latency, but because the cloud platform can access a vast range of resources, the organization is more powerful and the results it returns are more accurate. Any residual edge work is forwarded to the broker/worker nodes, and the designated node picks up the large amount of data transmitted via the bagging procedure. All edge-computing-enabled devices in the HealthFog-CCNN system, such as gateway devices and network nodes acting as brokers, share a LAN as the resource for learning. The resource manager resides in the broker node's software section, so broker devices also serve as a doorway for job demands arriving from the gateways. The resource manager's decision, delivered through an access point, specifies where the data should be sent. There are three possibilities. A broker can hand the data to a worker node, or transfer it to an additional worker node.
The CDC (cloud-based data center) can also be used to process the information.
Depending on the circumstances, the gateway might deliver data directly to the worker node or to the broker. Only when the broker has enough connections and the worker nodes are overloaded does the broker let the designated node transmit the information. If data must be transferred to the cloud, it has to go through the broker node, because gateways are unable to connect to a VPN with a cloud-based virtual environment.
For predicting heart disorders, the suggested model employs the CCNN [
The entropy loss is used to train the cascaded CNN architecture, and the layers in the cascaded networks are given a threshold value. The input, which comprises the data characteristics, is first passed to the CNN's convolution layer and then transmitted to the max-pooling layer.
Hidden neurons (HNe) are the CCNN's key hyperparameters, and each layer's activation function (AF) can be set. The hidden neuron range is allocated between 5 and 255. For the AF, the Rectified Linear Unit (ReLU) has advantages over other choices since it does not activate all neurons at the same moment.
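The sparse-activation property of ReLU mentioned above can be shown with a tiny NumPy sketch (the pre-activation values are hypothetical):

```python
import numpy as np

def relu(z):
    """ReLU leaves positive pre-activations unchanged and zeroes out the rest."""
    return np.maximum(0.0, z)

# Hypothetical pre-activations of five hidden neurons
z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
a = relu(z)
print(a)                    # [0.  0.  0.  1.5 3. ]
print(np.count_nonzero(a))  # only 2 of 5 neurons fire, i.e. not all activate at once
```

This sparsity is why ReLU keeps the cascaded layers cheap to evaluate on edge hardware.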
The objective of this project's intelligent health system with the GSO method is to reduce the Mean Square Error (MSE) between the projected and actual results, as shown in
It is used for "finding the mean of the squares of the errors," i.e., the mean squared difference between the projected results Prq and the actual results Acq.
The total quantity of values is referred to as q. Thus, reducing the error leads to a higher prediction rate for the automated health system with IoT-assisted HealthFog edge computing.
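The MSE objective that GSO minimizes can be sketched in a few lines (the predicted/actual values below are hypothetical):

```python
import numpy as np

def mse(predicted, actual):
    """Mean squared error between projected results (Pr_q) and actual results (Ac_q)."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Average of the squared differences over all q values
    return float(np.mean((predicted - actual) ** 2))

print(mse([0.9, 0.2, 0.8], [1, 0, 1]))  # (0.01 + 0.04 + 0.04) / 3 = 0.03
```

GSO would treat this value as the fitness to minimize when tuning the CCNN's hyperparameters.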
The GSO [
Moreover, the problem of converging to a local optimum is addressed during the discovery phase, yielding faster convergence than previous techniques.
In this HealthFog-CCNN method, deep-learning-based ensemble classifiers are used for the binary classification problem. It uses bagging, Multilayer Perceptron, boosting, Partial Decision Tree (PART), and gradient-boosted decision trees (XGBoost) to classify the data. The model is initially trained using the Cleveland dataset's cardiac patient records and the associated known output classes, and is then used to forecast the outputs for actual data inputs, as illustrated in
Python was used to develop the pre-processing and ensemble deep learning elements. The pre-processing component normalizes the information using the dataset's min and max field parameters, as well as their distributions. The scikit-learn library was used in the ensemble deep learning module [
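A hard-voting ensemble over the five learner types can be sketched with scikit-learn on synthetic data; note the stand-ins: `DecisionTreeClassifier` replaces PART (not in scikit-learn) and `GradientBoostingClassifier` replaces XGBoost (which needs the separate `xgboost` package), so this is illustrative rather than the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 13))                 # 13 features, as in the Cleveland data
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # hypothetical binary labels

ensemble = VotingClassifier(
    estimators=[
        ("bagging", BaggingClassifier(random_state=0)),
        ("part", DecisionTreeClassifier(random_state=0)),     # stand-in for PART
        ("mlp", MLPClassifier(hidden_layer_sizes=(20, 20, 10),
                              max_iter=500, random_state=0)),
        ("xgb", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost
        ("boosting", AdaBoostClassifier(random_state=0)),
    ],
    voting="hard",  # hard voting = majority vote over predicted labels
)
ensemble.fit(X, y)
preds = ensemble.predict(X)
print(preds.shape)  # (200,): one 0/1 prediction per record
```

`voting="hard"` implements exactly the label-counting majority rule described earlier, rather than averaging probabilities.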
| Parameter | Value |
|---|---|
| Input layer size | 13 |
| Output layer size | 2 |
| Number of hidden layers | 3 |
| Layer description | Fully connected (FC) layers with 20, 20, and 10 nodes |
| Optimizer | Adam |
| Activation function | ReLU |
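The MLP configuration in the table maps directly onto scikit-learn's `MLPClassifier`; a minimal sketch on synthetic 13-feature data (note scikit-learn internally uses a single output unit for a binary problem, even though the table lists two output classes):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Architecture from the table: 13 inputs, hidden layers of 20, 20 and 10 nodes,
# ReLU activations, Adam optimizer, binary output.
mlp = MLPClassifier(hidden_layer_sizes=(20, 20, 10), activation="relu",
                    solver="adam", max_iter=1000, random_state=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 13))           # hypothetical 13-feature patient records
y = (X.sum(axis=1) > 0).astype(int)      # hypothetical binary labels

mlp.fit(X, y)
print(mlp.n_layers_)   # 5 = input + 3 hidden + output
print(mlp.n_outputs_)  # 1 output unit for a binary problem in scikit-learn
```

The same estimator is one of the five voters in the ensemble stage.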
We developed and installed the proposed HealthFog-CCNN model on a real fog architecture of gadgets using the FogBus platform [
| Component | Hardware |
|---|---|
| Gateway device | Samsung Galaxy S7 with Android 9 |
| Broker/Master node | Dell XPS 13 with Intel(R) |
| Worker node | Raspberry Pi 3B+, ARM (Advanced RISC Machine) Cortex-A53 |
| Public cloud | Microsoft Azure B1s machine, 1 vCPU |
We used data from cardiac patients to predict the existence of heart problems in a person, encoded as an integer of 0 (no presence) or 1 (presence). The trials are conducted using the Cleveland database [
The evaluation uses a 7:2:1 ratio: 70% of the dataset is used for training, 20% for testing, and 10% for validation. The evaluation metrics are accuracy, execution time, energy consumption, jitter, latency, and network bandwidth. The classifiers are used for evaluation. In
| Classifiers | Accuracy (%) | Energy consumption (J) | Latency | Bandwidth (Mbps) |
|---|---|---|---|---|
| Bagging | 4.20 | 920 | 756.78 | 880 |
| PART | 3.65 | 635 | 475.30 | 858 |
| MLP | 7.85 | 520 | 978.25 | 950 |
| XGBoost | 4.12 | 810 | 815.20 | 901 |
| Boosting | 5.78 | 845 | 420.65 | 876 |
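The 7:2:1 train/test/validation split described above can be sketched with two successive `train_test_split` calls (the 300-record array is hypothetical stand-in data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))          # hypothetical 13-feature records
y = rng.integers(0, 2, size=300)        # hypothetical binary labels

# First carve off 70% for training; then split the remaining 30% into
# 20% test and 10% validation (one third of the remainder), a 7:2:1 ratio overall.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=1/3, random_state=0)

print(len(X_train), len(X_test), len(X_val))  # 210 60 30
```

In practice a stratified split (`stratify=y`) would better preserve the class balance of the Cleveland labels.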
In
For HealthFog-CCNN, the MLP classifier achieves the best results. Other environments, such as the cloud setup, incur very high energy consumption. In
In
Providing health care as a service is a massive undertaking. In this work, HealthFog is integrated with an Optimized Cascaded Convolutional Neural Network framework for diagnosing heart disease. We propose a new fog-based smart healthcare system for automatic diagnosis of heart diseases utilizing deep learning and IoT, dubbed HealthFog-CCNN. This study focuses solely on the medical elements for patients with heart disease. HealthFog-CCNN is delivered as a fog service and effectively organizes information from numerous IoT devices for heart patients. HealthFog-CCNN incorporates deep learning into edge computing devices and uses it to analyze heart problems in real time. Previous research on heart patient analysis did not use deep learning and had poor predictive performance, rendering it ineffective in real-world situations, while deep-learning-based methods demand substantial computational resources (CPU and GPU) for both classification and prediction. Employing communication and model-distribution strategies such as ensembling, this study enabled complicated deep learning networks to be incorporated into edge computing paradigms, allowing good accuracy with very low latency. Training the neural network on a common dataset and creating a functioning system that gives real-time predictive performance was also proven for real-life heart patient information processing. The proposed method uses five different classifiers for diagnosing heart patients; compared with all the classifiers, MLP outperforms the others in all the metrics, such as energy usage, communication bandwidth, latency, accuracy, and processing time. HealthFog-CCNN thus efficiently predicts heart disease. In future work, deeper learning models can be used to further improve these metrics.