Open Access
ARTICLE
Task Offloading and Resource Allocation in IoT Based Mobile Edge Computing Using Deep Learning
1 Dean of the Faculty of Economics, Department of Management and Marketing, Faculty of Economics, Urgench State University, Urganch, 220100, Uzbekistan
2 Basic Department Financial Control, Analysis and Audit of Moscow Main Control Department, Plekhanov Russian University of Economics, Moscow, 117997, Russia
3 Department of Computer Science and Engineering, KL Deemed to University, Vaddeswaram, Guntur, Andhra Pradesh, India
4 Department of Computer Science and Engineering, GMR Institute of Technology, Andhra Pradesh, Rajam, India
5 Department of Applied Data Science, Noroff University College, Kristiansand, Norway
6 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, 346, United Arab Emirates
7 Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
8 Department of Software, Kongju National University, Cheonan, 31080, Korea
* Corresponding Author: Jungeun Kim. Email:
Computers, Materials & Continua 2023, 76(2), 1463-1477. https://doi.org/10.32604/cmc.2023.038417
Received 12 December 2022; Accepted 16 March 2023; Issue published 30 August 2023
Abstract
Recently, computation offloading has become an effective method for overcoming the constraints of a mobile device (MD) by offloading computation-intensive and delay-sensitive application tasks to a remote cloud-based data center. Smart cities benefit from offloading such tasks to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and many access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision that minimizes the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the seagull optimization (SGO) algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems, with a maximum reward of 0.8967.
The Internet of Things (IoT) has become increasingly important in daily life as information technology has advanced. Interconnected devices collect and exchange various kinds of information via a transmission network linking different IoT nodes [1,2]. A variety of IoT applications can offer users fine-grained and highly precise network services. The IoT thus interconnects a growing number of devices and sensors, which can produce massive quantities of information requiring additional processing that, in turn, offers intelligence to service providers and customers [3]. In typical cloud computing (CC), all information must be uploaded to a central server, and the outcomes are transferred back to the devices and sensors after processing [4,5]. Such a technique places various constraints on the network, particularly in terms of the resource and bandwidth costs of information transmission [6]. Fig. 1 depicts the overview of task offloading in the mobile edge computing (MEC) method.
Moreover, as the quantity of information grows, the network's performance deteriorates [7]. MEC offers storage and computation functions at the network edge, closer to the client. Compared to conventional CC, MEC decreases the communication latency experienced by the user and mitigates the load and congestion of the central network [8,9]. As a key technique in MEC, computational offloading can efficiently alleviate the mismatch between the task load and the communication and computing abilities of terminal devices [10].
Even though MEC offloading can efficiently increase the efficiency of the wireless network [11], it cannot always meet the service requirements of every device because of the limited computational resources of the MEC server in certain hotspots [12]. Meanwhile, the CPUs of many mobile devices are idle or lightly loaded, so their resources go partly unused. Device-to-device (D2D) transmission is a novel technique that enables terminal devices to interact directly via shared resources controlled by the base station (BS) [13,14]. D2D transmission can reduce BS load and transmission delay while saving energy and expanding the transmission range. Based on this, researchers have incorporated D2D into the MEC scheme to improve the efficiency of user offloading through suitable computational offloading techniques [15].
Tan et al. [16] examined the problems of communication resource allocation, offloading decisions, association decisions, and computational resource allocation in collaborative MEC (C-MEC). The delay-sensitive task of the user is either computed locally or offloaded to MEC servers or collaborative devices. The objective is to minimize the mobile users' overall power consumption under delay constraints. The problem is expressed as a mixed-integer nonlinear program (MINLP) covering the joint optimization of computational resource allocation, task offloading decisions, association decisions, and subcarrier and energy allocation. Deng et al. [17] proposed an autonomous partial offloading scheme for delay-sensitive computational tasks in multi-user industrial IoT (IIoT) MEC systems. The primary objective is to offer offloading services with minimal delay. In particular, a Q-learning mechanism is implemented to provide discrete partial offloading decisions.
Chen et al. [18] proposed a two-stage alternating technique based on sequential quadratic programming (SQP) and deep reinforcement learning (DRL). At the upper level, given the assigned CPU frequency and transmission power, the cache and task offloading decision problems are resolved using the deep Q-network (DQN) model. At the lower level, the CPU frequency allocation and the optimum transmission power, given the cache and offloading decisions, are obtained through the SQP method. Zhu et al. [19] examined joint cloudlet deployment and task offloading problems to diminish the task response delay, user power consumption, and the number of deployed cloudlets.
Dai et al. [20] recommended an effective DRL-based offloading framework for MEC with edge-cloud collaboration, in which each computation-intensive task is executed locally or offloaded to the cloud server. By jointly considering i) the dynamic edge-cloud platform and ii) fast offloading decisions, DRL is leveraged to minimize the processing delay of tasks by efficiently integrating the cloud server, the computational resources of vehicles, and edge servers (ESs). In particular, a DQN is exploited to adaptively learn optimum offloading schemes. In [21], to minimize the overhead of fog computing networks, including energy consumption and task processing delay, while preserving the quality-of-service (QoS) requirements of distinct kinds of IoT devices (IDs), a QoS-aware resource allocation technique is proposed that jointly considers the association between IDs and fog nodes (FNs), computation resource allocation, and communication resource allocation to improve offloading decisions while reducing network overhead.
This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision that minimizes the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems.
The rest of the paper is organized as follows: Section 2 introduces the proposed model and Section 3 offers the experimental validation. Finally, Section 4 concludes the study.
In this section, we describe the proposed TORA-DLSGO algorithm for task offloading and resource allocation in the IoT-based MEC environment. The TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision that minimizes the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources.
The MEC server (MES) is deployed on the telecommunication infrastructure, namely Long-Term Evolution (LTE) base stations (BSs). Mobile devices (MDs) such as drones, smartphones, robots, and tablets connect to the edge computing controller at the LTE BS in the nearest region (position) for computational offloading. Consider that the MEC network (MECN) spans multiple regions comprising N MDs and multiple offloading positions.
The transmission bandwidth between each MD and each offloading position is given.
Consider that every MD has M independent, large real-time tasks that are executed either remotely in the MEC network or locally on the MD via computational offloading [22]. Each task has a given size, and a task cannot be divided into subtasks to be processed on several devices.
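As a concrete (hypothetical) illustration of this system model, the sketch below represents atomic per-task offloading decisions for N MDs with M tasks each; all names and parameter values are assumptions for illustration, not the paper's notation.

```python
import numpy as np

# Hypothetical sketch of the system model: N MDs, each with M indivisible
# real-time tasks, and P candidate offloading positions (names assumed).
rng = np.random.default_rng(seed=0)

N, M, P = 4, 3, 2                          # MDs, tasks per MD, positions
task_size = rng.uniform(1.0, 5.0, (N, M))  # task input sizes (illustrative)

# Offloading decision: 0 = execute locally, 1..P = offload to position p.
# A task is atomic, so each (MD, task) pair gets exactly one placement.
decision = rng.integers(0, P + 1, (N, M))

local_tasks = int((decision == 0).sum())
offloaded_tasks = int((decision > 0).sum())
print("local:", local_tasks, "offloaded:", offloaded_tasks)
```

Because a task cannot be split, the decision variable is a single integer per (MD, task) pair rather than a fractional allocation.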
In the presented TORA-DLSGO technique, the DBN model is used for optimum offloading decision-making. The DBN comprises stacked restricted Boltzmann machines (RBMs) and a backpropagation neural network (BPNN) [23]. Training is divided into pre-training and fine-tuning: the former trains each RBM in an unsupervised manner, whereas the latter fine-tunes the bias and weight parameters of the pretrained model by executing the BP technique. The DBN is mainly characterized by the RBM energy function

$E(v, h) = -\sum_{i=1}^{I} a_i v_i - \sum_{j=1}^{J} b_j h_j - \sum_{i=1}^{I} \sum_{j=1}^{J} v_i w_{ij} h_j,$

where $I$ and $J$ correspondingly indicate the numbers of visible and hidden neurons, $v_i$ and $h_j$ denote the states of the $i$-th visible and $j$-th hidden node, $a_i$ and $b_j$ are their biases, and $w_{ij}$ is the weight connecting them. From this expression, the joint probability of a visible-hidden configuration is

$P(v, h) = e^{-E(v, h)} / Z,$ with partition function $Z = \sum_{v, h} e^{-E(v, h)}.$

Since there are no intra-layer connections, the conditional activation probabilities factorize as

$P(h_j = 1 \mid v) = \sigma\!\left(b_j + \sum_i v_i w_{ij}\right),$ $P(v_i = 1 \mid h) = \sigma\!\left(a_i + \sum_j w_{ij} h_j\right),$

where $\sigma(\cdot)$ is the logistic sigmoid. The RBM is trained to maximize the log-likelihood of the training set,

$\mathcal{L}(\theta) = \sum_{n=1}^{N} \log P(v^{(n)}),$

where $N$ shows the number of instances. Furthermore, a stochastic gradient ascent method is exploited to compute the derivative of this likelihood with respect to every parameter, including the bias and weight of every node:

$\partial \mathcal{L} / \partial w_{ij} = \langle v_i h_j \rangle_{data} - \langle v_i h_j \rangle_{model},$ $\partial \mathcal{L} / \partial a_i = \langle v_i \rangle_{data} - \langle v_i \rangle_{model},$ $\partial \mathcal{L} / \partial b_j = \langle h_j \rangle_{data} - \langle h_j \rangle_{model}.$
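The unsupervised RBM pre-training step described above is commonly approximated with one-step contrastive divergence (CD-1); the sketch below shows a single CD-1 parameter update under assumed shapes and learning rate. It is an illustrative stand-in for the stochastic gradient ascent, not the paper's exact implementation.

```python
import numpy as np

# CD-1 update for one RBM layer (illustrative shapes and learning rate).
rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

I, J = 6, 4                        # visible (I) and hidden (J) units
W = 0.01 * rng.standard_normal((I, J))
a, b = np.zeros(I), np.zeros(J)    # visible and hidden biases

v0 = rng.integers(0, 2, I).astype(float)   # one binary training instance

# Positive phase: hidden probabilities and a sampled hidden state.
ph0 = sigmoid(b + v0 @ W)
h0 = (rng.random(J) < ph0).astype(float)

# Negative phase: one Gibbs step reconstructs the visible layer.
pv1 = sigmoid(a + W @ h0)
v1 = (rng.random(I) < pv1).astype(float)
ph1 = sigmoid(b + v1 @ W)

# Stochastic gradient ascent on the CD approximation of the log-likelihood.
lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
a += lr * (v0 - v1)
b += lr * (ph0 - ph1)
```

The data-dependent term uses the training instance itself, while the model-dependent term is approximated by a single reconstruction instead of a full Gibbs chain.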
Finally, the SGO algorithm is used for the parameter tuning of the DBN model. SGO is based on the migration and attacking behavior of seagulls [24]. The mathematical processes of migrating toward and attacking the prey are determined as follows. The migration phase simulates how the set of seagulls moves from one position to another, during which each seagull must satisfy three conditions. To avoid collisions between neighbors (that is, other seagulls), an additional parameter $A$ is utilized to evaluate a new search position:

$C_s = A \times P_s(x),$ with $A = f_c - x \, (f_c / Max_{iteration}),$

where $C_s$ denotes a position that does not collide with other seagulls, $P_s(x)$ is the current position of the seagull, $x$ denotes the current iteration, and $f_c$ controls the frequency of employing $A$, which decreases linearly from $f_c$ to 0. After avoiding collisions, the seagulls move toward the direction of the best neighbor:

$M_s = B \times (P_{bs}(x) - P_s(x)),$ with $B = 2 A^2 \, rd,$

where $M_s$ represents the movement of the search agent toward the best seagull $P_{bs}(x)$, $B$ balances exploration and exploitation, and $rd$ is a random number in $[0, 1]$. Lastly, each seagull updates its distance to the best search agent:

$D_s = |C_s + M_s|.$

During the attack phase, seagulls perform a spiral motion in the air, described in the $x$-$y$-$z$ planes as

$x' = r \cos(k),$ $y' = r \sin(k),$ $z' = r k,$ with $r = u \, e^{kv},$

wherein $r$ defines the radius of each turn of the spiral, $k$ is an arbitrary value in $[0, 2\pi]$, $u$ and $v$ are constants defining the spiral shape, and $e$ is the base of the natural logarithm. At this point, the position of the attacking search agent is updated as

$P_s(x) = D_s \times x' \times y' \times z' + P_{bs}(x),$

so that the best solution is saved and the positions of the other search agents are updated toward it.
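The migration and attack steps above can be sketched on a toy fitness function as follows; the sphere function stands in for the DBN tuning objective, and the bound clipping, u = v = 1, and f_c = 2 are common defaults assumed here rather than values from the paper.

```python
import numpy as np

# One run of the seagull optimization loop on the sphere function.
rng = np.random.default_rng(2)
fitness = lambda p: np.sum(p ** 2, axis=-1)   # stand-in objective

pop, dim, max_iter, fc = 10, 3, 50, 2.0
P = rng.uniform(-10.0, 10.0, (pop, dim))      # seagull positions P_s
best = P[np.argmin(fitness(P))].copy()        # best position so far

for x in range(max_iter):
    A = fc - x * (fc / max_iter)              # decreases linearly to 0
    rd = rng.random((pop, 1))
    B = 2.0 * A ** 2 * rd                     # exploration/exploitation balance
    Cs = A * P                                # collision avoidance
    Ms = B * (best - P)                       # move toward the best seagull
    Ds = np.abs(Cs + Ms)                      # distance to the best position
    # Attack phase: spiral motion in the x-y-z planes.
    k = rng.uniform(0.0, 2.0 * np.pi, (pop, 1))
    u = v = 1.0
    r = u * np.exp(k * v)
    P = Ds * (r * np.cos(k)) * (r * np.sin(k)) * (r * k) + best
    P = np.clip(P, -10.0, 10.0)               # keep agents inside the bounds
    cur = P[np.argmin(fitness(P))]
    if fitness(cur) < fitness(best):
        best = cur.copy()

print("best fitness found:", float(fitness(best)))
```

In the TORA-DLSGO setting, each position vector would encode DBN hyperparameters and the fitness would be the validation error of the offloading-decision model.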
The mathematical model is used to minimize the energy consumption of every MD, subject to the latency requirements of the tasks and the restricted computation and communication resources.
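To make the energy-versus-latency trade-off concrete, the sketch below compares local execution with offloading for a single task using the classic CMOS dynamic-energy model; all parameter values (cycles per bit, CPU frequencies, transmit power, data rate, the delay threshold) are assumptions for illustration, not the paper's settings.

```python
# Per-task local-vs-offload check for the energy/latency objective above.
def local_cost(size_bits, cycles_per_bit=1000, f_local=1e9, kappa=1e-27):
    cycles = size_bits * cycles_per_bit
    delay = cycles / f_local                      # execution time (s)
    energy = kappa * f_local ** 2 * cycles        # CMOS dynamic energy (J)
    return delay, energy

def offload_cost(size_bits, rate=20e6, p_tx=0.5,
                 cycles_per_bit=1000, f_edge=10e9):
    t_up = size_bits / rate                       # upload delay (s)
    t_exec = size_bits * cycles_per_bit / f_edge  # edge execution delay (s)
    energy = p_tx * t_up                          # MD only spends TX energy
    return t_up + t_exec, energy

size = 2e6                                        # a 2-Mbit task
d_loc, e_loc = local_cost(size)
d_off, e_off = offload_cost(size)
delay_max = 0.5                                   # latency requirement (s)
offload = d_off <= delay_max and e_off < e_loc
print(f"local {d_loc:.3f}s/{e_loc:.3f}J, offload {d_off:.3f}s/{e_off:.3f}J, "
      f"decision: offload={offload}")
```

With these assumed numbers the task meets the delay threshold only when offloaded and costs the MD far less energy, which is the situation the offloading decision is meant to detect.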
The experimental study of the TORA-DLSGO model is investigated in detail. Table 1 and Fig. 3 report an overall convergence rate (CR) examination of the TORA-DLSGO model with other models such as DQN, Deep Deterministic Policy Gradient (DDPG), and fuzzy logic based DDPG (FL-DDPG) on distinct episodes. The resultant values indicate that the TORA-DLSGO model reaches optimal CR values under each episode.
Fig. 4 represents the reward analysis of the TORA-DLSGO model under different aggregation intervals. The results highlight that the TORA-DLSGO model obtained improved performance with effective reward values; in addition, it accomplished an increasing reward at 30,000 aggregation intervals.
Table 2 and Fig. 5 exhibit a comparative reward study of the TORA-DLSGO model with other existing models [25]. The results portray the improvements of the TORA-DLSGO model over the other models under all bandwidths (BW). For instance, with a BW of 5, the TORA-DLSGO model reaches an improved reward of 0.8196, whereas the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.5604, 0.5616, 0.7001, 0.7515, and 0.7623, respectively. With a BW of 10, the TORA-DLSGO technique reaches an improved reward of 0.9832, while the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.5807, 0.9247, 0.9438, 0.9605, and 0.9677, respectively. At the highest bandwidth, the TORA-DLSGO approach reaches a reward of 0.9952, whereas the random, greedy, DQN, DDPG, and FL-DDPG methods attain reward values of 0.5819, 0.9856, 0.9880, 0.9904, and 0.9952, respectively.
A brief set of energy consumption (ECM) inspections of the TORA-DLSGO model is examined under distinct BW values in Table 3 and Fig. 6. The obtained results point out the enhanced efficacy of the TORA-DLSGO model under each BW value. For instance, with a BW of 5, the TORA-DLSGO model results in a minimized ECM of 2.0389, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.8015, 5.8015, 3.6821, 3.0678, and 2.7914, respectively. With a BW of 10, the TORA-DLSGO technique results in a minimized ECM of 0.1653, while the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.3561, 0.9331, 0.7488, 0.5492, and 0.3342, respectively. Similarly, with a BW of 12, the TORA-DLSGO technique results in a decreased ECM of 0.0578, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.4636, 0.3956, 0.3035, 0.2267, and 0.1653, respectively.
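For perspective, the reported ECM values at a BW of 10 translate into the following relative saving over the strongest baseline, computed directly from the table's numbers:

```python
# Relative energy saving implied by the BW = 10 row: TORA-DLSGO (0.1653)
# versus the closest baseline, FL-DDPG (0.3342).
tora, fl_ddpg = 0.1653, 0.3342
saving = 1.0 - tora / fl_ddpg
print(f"energy reduction vs FL-DDPG: {saving:.1%}")   # prints "energy reduction vs FL-DDPG: 50.5%"
```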
Table 4 and Fig. 7 show a comparative analysis of the TORA-DLSGO technique with other existing models. The results portray the improvements of the TORA-DLSGO approach over the other models under all delay thresholds (DET). For instance, with a DET of 80, the TORA-DLSGO technique achieves an improved reward of 0.5862, while the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.4178, 0.4913, 0.4887, 0.4991, and 0.5519, respectively. With a DET of 90, the TORA-DLSGO approach attains an improved reward of 0.8352, while the random, greedy, DQN, DDPG, and FL-DDPG systems accomplish lower reward values of 0.4900, 0.5842, 0.6190, 0.6667, and 0.8111, respectively. With a DET of 100, the TORA-DLSGO model attains an improved reward of 0.8967, whereas the random, greedy, DQN, DDPG, and FL-DDPG methods attain lower reward values of 0.5661, 0.6976, 0.7389, 0.8034, and 0.8743, respectively.
A brief set of delay (DEL) results for the TORA-DLSGO approach is inspected under distinct IoT index values in Table 5 and Fig. 8. The attained outcomes point out the superior efficiency of the TORA-DLSGO technique under each IoT index value. For example, with an IoT index of 1, the TORA-DLSGO approach results in a minimized DEL of 0.0434, while the random, greedy, DQN, DDPG, and FL-DDPG methods attain higher DEL values of 0.0992, 0.0677, 0.0759, 0.0552, and 0.0853, respectively. With an IoT index of 5, the TORA-DLSGO method results in a decreased DEL of 0.0227, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher DEL values of 0.0316, 0.0989, 0.0874, 0.0722, and 0.0675, respectively. Similarly, with an IoT index of 9, the TORA-DLSGO model results in a reduced DEL of 0.0230, while the random, greedy, DQN, DDPG, and FL-DDPG approaches obtain higher DEL values of 0.0992, 0.0997, 0.0798, 0.0811, and 0.0709, respectively.
In this study, we have developed a new TORA-DLSGO algorithm for task offloading and resource allocation in the IoT-based MEC environment. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision that minimizes the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. In the presented TORA-DLSGO technique, the DBN model is used for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems. In future work, metaheuristic-based task scheduling schemes can be designed to optimize makespan in the IoT-based MEC environment.
Funding Statement: This research was supported by the Technology Development Program of MSS (No. S3033853).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References
1. S. Dong, Y. Xia and J. Kamruzzaman, “Quantum particle swarm optimization for task offloading in mobile edge computing,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3225313 [Google Scholar] [CrossRef]
2. L. Tan, Z. Kuang, J. Gao and L. Zhao, “Energy-efficient collaborative multi-access edge computing via deep reinforcement learning,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3213603 [Google Scholar] [CrossRef]
3. L. Sun, L. Wan and X. Wang, “Learning-based resource allocation strategy for industrial IoT in UAV-enabled MEC systems,” IEEE Transactions on Industrial Informatics, vol. 17, no. 7, pp. 5031–5040, 2020. [Google Scholar]
4. S. Talwani, J. Singla, G. Mathur, N. Malik, N. Z. Jhanjhi et al., “Machine-learning-based approach for virtual machine allocation and migration,” Electronics, vol. 11, no. 19, pp. 3249, 2022. [Google Scholar]
5. S. K. Mishra, S. Mishra, A. Alsayat, N. Z. Jhanjhi, M. Humayun et al., “Energy-aware task allocation for multi-cloud networks,” IEEE Access, vol. 8, pp. 178825–178834, 2020. [Google Scholar]
6. G. G. Wang, M. Lu, Y. Q. Dong and X. J. Zhao, “Self-adaptive extreme learning machine,” Neural Computing and Applications, vol. 27, no. 2, pp. 291–303, 2016. [Google Scholar]
7. Y. Wang, X. Qiao and G. G. Wang, “Architecture evolution of convolutional neural network using monarch butterfly optimization,” Journal of Ambient Intelligence and Humanized Computing, 2022. https://doi.org/10.1007/s12652-022-03766-4 [Google Scholar] [CrossRef]
8. W. Fan, Z. Chen, Z. Hao, Y. Su, F. Wu et al., “DNN deployment, task offloading, and resource allocation for joint task inference in IIoT,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3192882 [Google Scholar] [CrossRef]
9. H. Lu, C. Gu, F. Luo, W. Ding, S. Zheng et al., “Optimization of task offloading strategy for mobile edge computing based on multi-agent deep reinforcement learning,” IEEE Access, vol. 8, pp. 202573–202584, 2020. [Google Scholar]
10. Y. Wang, J. Yang, X. Guo and Z. Qu, “Satellite edge computing for the internet of things in aerospace,” Sensors, vol. 19, no. 20, pp. 4375, 2019. [Google Scholar] [PubMed]
11. Z. Wang, G. Xue, S. Qian and M. Li, “CampEdge: Distributed computation offloading strategy under large-scale AP-based edge computing system for IoT applications,” IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6733–6745, 2020. [Google Scholar]
12. S. Mao, S. He and J. Wu, “Joint UAV position optimization and resource scheduling in space-air-ground integrated networks with mixed cloud-edge computing,” IEEE Systems Journal, vol. 15, no. 3, pp. 3992–4002, 2020. [Google Scholar]
13. H. Guo and J. Liu, “UAV-enhanced intelligent offloading for internet of things at the edge,” IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2737–2746, 2019. [Google Scholar]
14. Y. Zhang, X. Zhou, Y. Teng, J. Fang and W. Zheng, “Resource allocation for multi-user MEC system: Machine learning approaches,” in Int. Conf. on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, pp. 794–799, 2018. [Google Scholar]
15. J. Gao, Z. Kuang, J. Gao and L. Zhao, “Joint offloading scheduling and resource allocation in vehicular edge computing: A two layer solution,” IEEE Transactions on Vehicular Technology, 2022. https://doi.org/10.1109/TVT.2022.3220571 [Google Scholar] [CrossRef]
16. L. Tan, Z. Kuang, L. Zhao and A. Liu, “Energy-efficient joint task offloading and resource allocation in OFDMA-based collaborative edge computing,” IEEE Transactions on Wireless Communications, vol. 21, no. 3, pp. 1960–1972, 2021. [Google Scholar]
17. X. Deng, J. Yin, P. Guan, N. N. Xiong, L. Zhang et al., “Intelligent delay-aware partial computing task offloading for multi-user Industrial Internet of Things through edge computing,” IEEE Internet of Things Journal, 2021. https://doi.org/10.1109/JIOT.2021.3123406 [Google Scholar] [CrossRef]
18. Q. Chen, Z. Kuang and L. Zhao, “Multiuser computation offloading and resource allocation for cloud–edge heterogeneous network,” IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3799–3811, 2021. [Google Scholar]
19. X. Zhu and M. Zhou, “Multiobjective optimized cloudlet deployment and task offloading for mobile-edge computing,” IEEE Internet of Things Journal, vol. 8, no. 20, pp. 15582–15595, 2021. [Google Scholar]
20. F. Dai, G. Liu, Q. Mo, W. Xu and B. Huang, “Task offloading for vehicular edge computing with edge-cloud cooperation,” World Wide Web, pp. 1–19, 2022. https://doi.org/10.1007/s11280-022-01011-8 [Google Scholar] [CrossRef]
21. X. Huang, Y. Cui, Q. Chen and J. Zhang, “Joint task offloading and QoS-aware resource allocation in fog-enabled Internet-of-Things networks,” IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7194–7206, 2020. [Google Scholar]
22. T. Alfakih, M. M. Hassan, A. Gumaei, C. Savaglio and G. Fortino, “Task offloading and resource allocation for mobile edge computing by deep reinforcement learning based on SARSA,” IEEE Access, vol. 8, pp. 54074–54084, 2020. [Google Scholar]
23. H. Zhao, J. Liu, H. Chen, J. Chen, Y. Li et al., “Intelligent diagnosis using continuous wavelet transform and gauss convolutional deep belief network,” IEEE Transactions on Reliability, 2022. https://doi.org/10.1109/TR.2022.3180273 [Google Scholar] [CrossRef]
24. N. M. Alfaer, H. M. Aljohani, S. Abdel-Khalek, A. S. Alghamdi and R. F. Mansour, “Fusion-based deep learning with nature-inspired algorithm for intracerebral haemorrhage diagnosis,” Journal of Healthcare Engineering, vol. 2022, pp. 1–12, 2022. [Google Scholar]
25. X. Chen and G. Liu, “Federated deep reinforcement learning-based task offloading and resource allocation for smart cities in a mobile edge network,” Sensors, vol. 22, no. 13, pp. 4738, 2022. [Google Scholar] [PubMed]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.