Open Access

ARTICLE

Task Offloading and Resource Allocation in IoT Based Mobile Edge Computing Using Deep Learning

by Ilyos Abdullaev1, Natalia Prodanova2, K. Aruna Bhaskar3, E. Laxmi Lydia4, Seifedine Kadry5,6,7, Jungeun Kim8,*

1 Dean of the Faculty of Economics, Department of Management and Marketing, Faculty of Economics, Urgench State University, Urganch, 220100, Uzbekistan
2 Basic Department Financial Control, Analysis and Audit of Moscow Main Control Department, Plekhanov Russian University of Economics, Moscow, 117997, Russia
3 Department of Computer Science and Engineering, KL Deemed to University, Vaddeswaram, Guntur, Andhra Pradesh, India
4 Department of Computer Science and Engineering, GMR Institute of Technology, Andhra Pradesh, Rajam, India
5 Department of Applied Data Science, Noroff University College, Kristiansand, Norway
6 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, 346, United Arab Emirates
7 Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
8 Department of Software, Kongju National University, Cheonan, 31080, Korea

* Corresponding Author: Jungeun Kim. Email: email

Computers, Materials & Continua 2023, 76(2), 1463-1477. https://doi.org/10.32604/cmc.2023.038417

Abstract

Recently, computation offloading has become an effective method for overcoming the constraints of mobile devices (MDs) by offloading computation-intensive and delay-sensitive application tasks to remote cloud-based data centers. Smart cities, in particular, benefit from offloading to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and several access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems, with a maximum reward of 0.8967.

Keywords


1  Introduction

The Internet of Things (IoT) has become increasingly important in daily life as information technology has advanced. Interconnected devices collect and exchange various kinds of information via a transmission network linking different IoT nodes [1,2]. A variety of IoT applications can offer users fine-grained and highly precise network services. In such cases, IoT technology interconnects a growing number of devices and sensors, which produce massive quantities of information that may need additional processing to offer intelligence to service providers and customers [3]. In typical cloud computing (CC), all information must be uploaded to a central server, and the outcomes are transferred back to the devices and sensors after processing [4,5]. Such an approach places various constraints on the network, particularly in terms of the resource and bandwidth costs of information transmission [6]. Fig. 1 depicts the overview of task offloading in the mobile edge computing (MEC) method.


Figure 1: Overview of task offloading in the MEC model

Moreover, as the quantity of information grows, the network’s performance deteriorates [7]. MEC offers storage and computation functions at the edge of the network, closer to the client. Compared to conventional CC, MEC decreases user communication latency while also mitigating the load and congestion of the central network [8,9]. As a key technique in MEC, computational offloading can efficiently alleviate the mismatch between the task load and the communication and computing capabilities of terminal devices [10].

Even though MEC offloading can increase the efficiency of the wireless network [11], it cannot meet the service requirements of every device because of the limited computational resources of the MEC server in certain hotspots [12]. Meanwhile, the CPUs of many mobile devices are idle or underutilized, resulting in inefficient usage and wasted resources. Device-to-device (D2D) transmission is a novel technique that enables terminal devices to interact directly via shared resources controlled by the base station (BS) [13,14]. D2D transmission can reduce BS load and transmission delay while saving energy and extending the transmission range. Based on this, researchers have incorporated D2D into MEC schemes to improve the efficiency of user offloading through suitable computational offloading techniques [15].

Tan et al. [16] examined the allocation of communication resources, offloading decisions, relationship decisions, and computational resource allocation problems in C-MEC. The delay-sensitive task of the user is either calculated locally or offloaded to MEC servers or collaborative devices. The objective is to minimize the mobile user’s overall power utilization under the delay constraints. The problem is expressed as a mixed-integer nonlinear program (MINLP) that covers the joint optimization of computational resource allocation, task offloading decisions, relationship decisions, and subcarrier and energy allocation. Deng et al. [17] proposed an autonomous partial offloading scheme for delay-sensitive computational tasks in multi-user industrial IoT (IIoT) MEC systems. The primary objective is to offer offloading services with minimal delay. In particular, a Q-learning mechanism is implemented to provide discrete partial offloading decisions.

Chen et al. [18] proposed a two-stage alternating technique based on sequential quadratic programming (SQP) and deep reinforcement learning (DRL). At the upper level, given the assigned CPU frequency and transmission power, the cache and task offloading decision problems are resolved using a deep Q-network (DQN) model. At the lower level, given the cache and offloading decisions, the optimum CPU frequency allocation and transmission power are obtained through the SQP method. Zhu et al. [19] examined joint cloudlet deployment and task offloading problems to reduce the task response delay, user power utilization, and the number of deployed cloudlets.

Dai et al. [20] recommended an effective DRL-based offloading framework for MEC with edge-cloud collaboration, in which computation-intensive tasks are executed locally or offloaded to the cloud server. Jointly considering i) the dynamic edge-cloud platform and ii) the need for fast offloading decisions, they leverage DRL to minimize the task processing delay by efficiently integrating the cloud server, the computational resources of vehicles, and edge servers (ESs). In particular, a DQN is exploited to adaptively learn optimum offloading schemes. In [21], to minimize the overhead of fog computing networks, including energy consumption and task processing delay, while preserving the quality of service (QoS) requirements of distinct kinds of IoT devices (IDs), the authors proposed a QoS-aware resource allocation technique that jointly considers the association between IDs and fog nodes (FNs), computation resource allocation, and communication resource allocation to enhance the offloading decision while reducing the network overhead.

This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, which enables an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperformed the existing model in reducing client overhead in the MEC systems.

The rest of the paper is organized as follows: Section 2 introduces the proposed model and Section 3 offers the experimental validation. Finally, Section 4 concludes the study.

2  The Proposed Model

In this study, we have developed a new TORA-DLSGO algorithm for task offloading and resource allocation in the IoT-based MEC environment. The proposed TORA-DLSGO technique addressed the resource management issue in the MEC server, which enables an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources.

2.1 Communication Model

The MEC servers rely on the telecommunication infrastructure, namely Long-Term Evolution (LTE) or the BS. Mobile devices (MDs) such as drones, smartphones, robots, and tablets connect to the edge computing controllers at the LTE or BS in the nearby region (position) for computational offloading. Consider that the MEC network (MECN) spans multiple regions comprising multiple APs, multiple ESs, and N MDs, represented as N = {1, 2, …, N}. An MD can connect to the MECN via a mobile network or an access point. Based on specific parameters, including response time, energy consumption, the ESs’ workload, or latency, the MD must find an effective position in the network to carry out offloading. The offloading behaviour is characterized by A = {a_1, a_2, …, a_N}, where X_n ∈ {0, 1, 2, 3} indicates the offloading decision of MD n (local computing: X_n = 0, nearby ES: X_n = 1, neighbouring ES: X_n = 2, remote cloud: X_n = 3). The computing delay and the bandwidth B_n, together with the processing frequency f_n, affect the offloading decision.

The transmission bandwidths between the MDs and the offloading positions are represented as B_e, B_a, and B_c. The overall transmission delay of a specific MD is denoted by T_n, while p_n^t indicates the transmit power and p_n^r the receive power. Thus, based on variables such as energy consumption, the ESs’ workload, response time, or latency, the MD needs to find a suitable position (the nearby ES, a neighbouring ES, or the remote cloud) for offloading the task. Once the task has been offloaded to the nearby or neighbouring ES, an effective resource allocation technique is required on the ES.
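
To make the communication model concrete, the following minimal Python sketch (not the authors’ code) computes the upload delay and transmission energy of a single MD, assuming the delay is simply D_n divided by the allocated bandwidth and the energy is the transmit power multiplied by that delay; all class names, variable names, and numeric values are illustrative assumptions.

```python
# Illustrative sketch of the communication model above (not the authors' code).
# It assumes upload delay = data_size / bandwidth and upload energy = p^t * delay.
from dataclasses import dataclass

LOCAL, NEAR_ES, NEIGHBOR_ES, REMOTE_CLOUD = 0, 1, 2, 3  # offloading decision X_n

@dataclass
class MobileDevice:
    data_size_bits: float   # D_n, size of the task to transmit
    tx_power_w: float       # p_n^t, transmit power
    rx_power_w: float       # p_n^r, receive power

def upload_delay(md: MobileDevice, bandwidth_bps: float) -> float:
    """Transmission delay T_n for sending D_n bits over the chosen link."""
    return md.data_size_bits / bandwidth_bps

def upload_energy(md: MobileDevice, bandwidth_bps: float) -> float:
    """Transmission energy spent by the MD while uploading."""
    return md.tx_power_w * upload_delay(md, bandwidth_bps)

if __name__ == "__main__":
    md = MobileDevice(data_size_bits=8e6, tx_power_w=0.5, rx_power_w=0.1)
    B_e = 20e6  # hypothetical bandwidth to the nearby edge server (B_e)
    print(f"T_n = {upload_delay(md, B_e):.4f} s, E_n^up = {upload_energy(md, B_e):.4f} J")
```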

2.2 Task Modeling

Consider that every MD has M independent, large real-time tasks that are either executed remotely in the MEC network through computational offloading or executed locally on the MD [22]. A task cannot be separated into subtasks to be processed on several devices. The task size is represented as D_n (transmitted data size), and R_n represents the computational resource needed to serve the task (number of CPU cycles). D_n and R_n are positively related as R_n = θ D_n, with θ a constant. Whether the task is executed in the MEC network or locally on the MD, D_n remains unchanged.
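
As a small illustration of the task model, the sketch below (hypothetical, not from the paper) derives R_n = θD_n and the resulting local execution time when the task stays on the device, assuming local execution time equals the required CPU cycles divided by the device CPU frequency; θ and the example numbers are assumed values.

```python
# Minimal sketch of the task model in Section 2.2, assuming R_n = theta * D_n
# and local execution time = R_n / f_n (CPU cycles over CPU frequency).
def required_cycles(data_size_bits: float, theta: float) -> float:
    """R_n: CPU cycles needed to process a task of size D_n."""
    return theta * data_size_bits

def local_execution_time(data_size_bits: float, theta: float, cpu_hz: float) -> float:
    """t_n^IoT when the task is processed on the device itself (X_n = 0)."""
    return required_cycles(data_size_bits, theta) / cpu_hz

# Example: a 1 MB task, 500 cycles/bit, 1 GHz local CPU -> about 4 s locally.
print(local_execution_time(8e6, theta=500.0, cpu_hz=1e9))
```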

2.3 DBN-Based Decision Making

In the presented TORA-DLSGO technique, the DBN model is used for optimum offloading decision-making. The DBN is composed of restricted Boltzmann machines (RBMs) and a backpropagation neural network (BPNN) [23]. Training is divided into pre-training and fine-tuning: the former trains the RBMs in an unsupervised manner, whereas the latter fine-tunes the bias and weight parameters of the pretrained model by executing the BP technique. The DBN is mainly built from RBMs, whose formulation is derived from the energy function of thermodynamics. Fig. 2 illustrates the architecture of the DBN. An RBM consists of an upper hidden layer and a lower visible layer, and its energy model can be formulated using the following expression:

E(v, h|θ) = −∑_{i=1}^{I} a_i v_i − ∑_{j=1}^{J} b_j h_j − ∑_{i=1}^{I} ∑_{j=1}^{J} w_{ij} v_i h_j   (1)


Figure 2: Architecture of DBN

In this expression, I and J respectively indicate the numbers of visible and hidden neurons, v = (v_1, v_2, …, v_I) and h = (h_1, h_2, …, h_J) respectively denote the states of the visible and hidden neurons, θ = {w_{ij}, a_i, b_j} specifies the parameters of the RBM, w_{ij} denotes the weight between the i-th visible neuron and the j-th hidden neuron, a_i specifies the bias of the i-th visible neuron, and b_j indicates the bias of the j-th hidden neuron. The joint probability distribution can be obtained from the energy function of the RBM:

p(v, h|θ) = e^{−E(v, h|θ)} / Z(θ)   (2)

Here, Z(θ) = ∑_{v,h} e^{−E(v, h|θ)} is the normalization factor (partition function). The probabilities that hidden neuron h_j and visible neuron v_i are activated are given by:

P(h_j = 1 | v) = σ(b_j + ∑_{i=1}^{I} w_{ij} v_i)   (3)

P(v_i = 1 | h) = σ(a_i + ∑_{j=1}^{J} w_{ij} h_j)   (4)

Here, σ(·) represents the activation function; adding this nonlinear factor enables the neural network (NN) to resolve complex problems. When training the RBM, the parameter set θ = {w_{ij}, a_i, b_j} is obtained by maximizing the likelihood of the training dataset, with the likelihood function formulated below:

L(θ) = ∏_{l=1}^{N} L(v^l | θ) = ∏_{l=1}^{N} p(v^l)   (5)

Here, N denotes the number of training instances. The stochastic gradient ascent method is used to compute the derivatives of the log-likelihood with respect to the parameters, including the bias and weight of every node, as follows:

∂ln L/∂w_{ij} = ∑_{l=1}^{N} [ P(h_j = 1 | v^l) v_i^l − ∑_{v} P(v) P(h_j = 1 | v) v_i ]   (6)

∂ln L/∂a_i = ∑_{l=1}^{N} [ v_i^l − ∑_{v} P(v) v_i ]   (7)

∂ln L/∂b_j = ∑_{l=1}^{N} [ P(h_j = 1 | v^l) − ∑_{v} P(v) P(h_j = 1 | v) ]   (8)
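
The following compact NumPy sketch ties Eqs. (1)–(8) together: it implements the RBM energy function, the conditional activation probabilities of Eqs. (3)–(4), and a one-step contrastive-divergence update that approximates the log-likelihood gradients of Eqs. (6)–(8). It is an illustrative reconstruction rather than the authors’ implementation; the layer sizes, learning rate, and random training data are assumptions. In a DBN, several such RBMs would be stacked and pretrained layer by layer before the BP fine-tuning described above.

```python
# Illustrative RBM sketch for Eqs. (1)-(8); not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # w_ij
        self.a = np.zeros(n_visible)                                # visible biases a_i
        self.b = np.zeros(n_hidden)                                 # hidden biases b_j
        self.lr = lr

    def energy(self, v, h):
        # Eq. (1): E(v, h) = -a.v - b.h - v^T W h
        return -(v @ self.a) - (h @ self.b) - (v @ self.W @ h)

    def p_h_given_v(self, v):
        # Eq. (3): activation probability of the hidden neurons
        return sigmoid(self.b + v @ self.W)

    def p_v_given_h(self, h):
        # Eq. (4): activation probability of the visible neurons
        return sigmoid(self.a + h @ self.W.T)

    def cd1_update(self, v0):
        """One contrastive-divergence step approximating Eqs. (6)-(8)."""
        ph0 = self.p_h_given_v(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = self.p_v_given_h(h0)
        ph1 = self.p_h_given_v(pv1)
        # Positive phase minus (sampled) negative phase
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.a += self.lr * (v0 - pv1)
        self.b += self.lr * (ph0 - ph1)

rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(100):
    rbm.cd1_update(rng.integers(0, 2, size=6).astype(float))  # random toy data
```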

2.4 Parameter Optimization

Finally, the SGO algorithm is used for the parameter tuning of the DBN model. SGO is based on the migration and attacking behaviour of seagulls [24]. The mathematical models of migration and prey attacking are described in the following. The migration phase simulates how the group of seagulls moves from one position to another, during which each seagull must satisfy three conditions. To avoid collisions between neighbours (that is, other seagulls), an additional parameter A is utilized to evaluate a new search position:

C_s = A × P_s(x)   (9)

where x denotes the current iteration, C_s denotes the position of the search agent that does not collide with other search agents, and P_s denotes the current position of the search agent.

A = f_c − (x × (f_c / Max_iteration))   (10)

where x = 0, 1, 2, …, Max_iteration.

Here, f_c controls the frequency of variation of A, which decreases linearly from f_c to 0; in this work, f_c is set to 2. After avoiding collisions with its neighbours, the search agent moves in the direction of the best neighbour:

M_s = B × (P_bs(x) − P_s(x))   (11)

where M_s denotes the movement of the search agent P_s towards the best-fit search agent P_bs (viz., the fittest seagull). B is a random factor responsible for properly balancing exploration and exploitation, computed as:

B = 2 × A² × rd   (12)

In this formula, rd denotes a random number that lies between zero and one. Finally, the search agent updates its position relative to the best search agent:

D_s = |C_s + M_s|   (13)

Here, D_s is the distance between the search agent and the best-fit search agent (viz., the best seagull, i.e., the one with the lowest fitness value). The exploitation step concentrates on utilizing the experience and history of the search process. While attacking prey, seagulls exhibit a spiral movement in the air, changing their attack angle and speed during migration and maintaining altitude with their wings and weight. The motion in the x, y, and z planes is described as follows:

x = r × cos(k)   (14)

y = r × sin(k)   (15)

z = r × k   (16)

r = u × e^{kv}   (17)

where r defines the radius of each turn of the spiral, k is a random value in the range [0, 2π], u and v are constants that define the spiral shape, and e is the base of the natural logarithm. The updated position is computed with the following formula:

P_s(x) = (D_s × x × y × z) + P_bs(x)   (18)

Here, P_s(x) stores the best solution and is used to update the positions of the other search agents.
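
A minimal NumPy sketch of one SGO population update, following Eqs. (9)–(18), is given below. The collision-avoidance term C_s, the movement M_s towards the best agent, the distance D_s, and the spiral attack are implemented directly; the fitness function, the search bounds, the clipping step, and the spiral constants u and v are illustrative assumptions, since in the TORA-DLSGO technique the fitness would instead score a DBN parameter configuration.

```python
# Illustrative SGO update following Eqs. (9)-(18); not the authors' code.
import numpy as np

rng = np.random.default_rng(1)

def sgo_step(positions, best, it, max_iter, fc=2.0, u=1.0, v_const=1.0,
             lb=-1.0, ub=1.0):
    """One SGO update of the whole population."""
    A = fc - it * (fc / max_iter)                      # Eq. (10)
    new_positions = np.empty_like(positions)
    for i, P_s in enumerate(positions):
        C_s = A * P_s                                  # Eq. (9): collision avoidance
        B = 2.0 * A * A * rng.random()                 # Eq. (12)
        M_s = B * (best - P_s)                         # Eq. (11): move toward best agent
        D_s = np.abs(C_s + M_s)                        # Eq. (13)
        k = rng.uniform(0.0, 2.0 * np.pi)              # random spiral angle
        r = u * np.exp(k * v_const)                    # Eq. (17)
        x, y, z = r * np.cos(k), r * np.sin(k), r * k  # Eqs. (14)-(16)
        # Eq. (18); clipping to the search bounds is a practical addition
        new_positions[i] = np.clip(D_s * x * y * z + best, lb, ub)
    return new_positions

# Toy usage: search a 3-dimensional space with a sphere fitness function.
fitness = lambda p: float(np.sum(p ** 2))
pop = rng.uniform(-1.0, 1.0, size=(10, 3))
max_iter = 50
for it in range(max_iter):
    best = pop[int(np.argmin([fitness(p) for p in pop]))].copy()
    pop = sgo_step(pop, best, it, max_iter)
print("best fitness found:", min(fitness(p) for p in pop))
```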

Based on the abovementioned communication and computation models, the overall time consumption and energy consumption of device n are evaluated as follows:

t_n = α_n (t_n^{up} + t_n^{MEC}) + (1 − α_n) t_n^{IoT}   (19)

e_n = α_n e_n^{up} + (1 − α_n) e_n^{IoT}   (20)

The mathematical model aims to minimize the energy consumption of every IoT device subject to the limited resources and the latency requirement:

min_{B_n, f_n, α_n} ∑_{n=1}^{N} e_n   s.t.

(C1) t_n ≤ T

(C2) ∑_{n=1}^{N} B_n ≤ B   (21)

(C3) ∑_{n=1}^{N} f_n ≤ F_MEC

(C4) α_n ∈ {0, 1}

Here, F_MEC denotes the overall computational resource of the MEC server [25]. Among these constraints, constraint (C1) states that the execution time of IoT device n must not exceed the delay threshold, so as to guarantee the quality of experience (QoE): as long as the execution time of an IoT task stays within the delay threshold, a satisfactory user experience is attained. For instance, in a community access control system with a delay threshold of 0.1 s for face detection, the user experience is fulfilled if face detection is finished within 0.1 s. Users perceive the same QoE whether face detection completes within 0.1 s or 0.01 s, so there is no benefit in pursuing an even lower processing time in such a real-time scene. Constraint (C2) states that the total assigned sub-bandwidth must not surpass the overall bandwidth of the BS. Constraint (C3) states that the computational resources assigned to the IoT devices by the MEC server must not surpass the overall computational resource of the MEC server. Constraint (C4) states that each task is processed either on the MEC server or on IoT device n: when α_n = 0, the task is processed on the IoT device; when α_n = 1, the task is offloaded to the MEC server.
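
To illustrate how a candidate solution can be scored against Eqs. (19)–(21), the sketch below sums the per-device energy and rejects any assignment that violates the delay, bandwidth, CPU, or binary constraints. The helper name evaluate and all numeric values are hypothetical; the energy terms follow the simple upload-power and local-power assumptions used in the earlier sketches.

```python
# Hypothetical feasibility/objective check for Eq. (21); not the authors' code.
from typing import List

def evaluate(alpha: List[int], B_alloc: List[float], f_alloc: List[float],
             D: List[float], theta: float, f_local: float,
             p_tx: float, p_local: float,
             T_max: float, B_total: float, F_mec: float) -> float:
    """Return total energy (objective of Eq. (21)) or +inf if infeasible."""
    total_energy = 0.0
    for n, a_n in enumerate(alpha):
        if a_n not in (0, 1):                         # (C4) binary decision
            return float("inf")
        if a_n == 1:                                  # offloaded to the MEC server
            t_up = D[n] / B_alloc[n]
            t_mec = theta * D[n] / f_alloc[n]
            t_n = t_up + t_mec                        # Eq. (19), alpha_n = 1
            e_n = p_tx * t_up                         # Eq. (20), upload energy
        else:                                         # processed locally
            t_n = theta * D[n] / f_local              # Eq. (19), alpha_n = 0
            e_n = p_local * t_n                       # Eq. (20), local energy
        if t_n > T_max:                               # (C1) delay threshold
            return float("inf")
        total_energy += e_n
    if sum(b for a, b in zip(alpha, B_alloc) if a) > B_total:   # (C2)
        return float("inf")
    if sum(f for a, f in zip(alpha, f_alloc) if a) > F_mec:     # (C3)
        return float("inf")
    return total_energy

# Example with two IoT devices: device 0 offloads, device 1 runs locally.
print(evaluate(alpha=[1, 0], B_alloc=[10e6, 0], f_alloc=[2e9, 0],
               D=[8e6, 1e6], theta=500.0, f_local=1e9,
               p_tx=0.5, p_local=0.9, T_max=5.0, B_total=20e6, F_mec=4e9))
```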

3  Results and Discussion

The experimental study of the TORA-DLSGO model is investigated in detail. Table 1 and Fig. 3 report an overall convergence rate (CR) examination of the TORA-DLSGO model with other models such as DQN, Deep Deterministic Policy Gradient (DDPG), and fuzzy logic based DDPG (FL-DDPG) on distinct episodes. The resultant values indicate that the TORA-DLSGO model reaches optimal CR values under each episode.


Figure 3: CR analysis of TORA-DLSGO approach under distinct episodes

Fig. 4 represents the reward analysis of the TORA-DLSGO model under different aggregation intervals. The results highlight that the TORA-DLSGO model obtains improved performance with effective reward values. In addition, the TORA-DLSGO model accomplishes an increased reward at 30000 aggregation intervals.


Figure 4: Reward analysis of TORA-DLSGO approach under different aggregation intervals

Table 2 and Fig. 5 exhibit a comparative reward study of the TORA-DLSGO model with other existing models [25]. The results portray the improvements of the TORA-DLSGO model over the other models under all bandwidth (BW) values. For instance, with a BW of 5, the TORA-DLSGO model reaches an improved reward of 0.8196, whereas the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.5604, 0.5616, 0.7001, 0.7515, and 0.7623, respectively. With a BW of 10, the TORA-DLSGO technique reaches an improved reward of 0.9832, while the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.5807, 0.9247, 0.9438, 0.9605, and 0.9677, respectively. At a higher BW value, the TORA-DLSGO approach reaches an improved reward of 0.9952, whereas the random, greedy, DQN, DDPG, and FL-DDPG methods attain reward values of 0.5819, 0.9856, 0.9880, 0.9904, and 0.9952, respectively.


Figure 5: Reward analysis of TORA-DLSGO methodology under distinct bandwidth

A brief energy consumption (ECM) inspection of the TORA-DLSGO model under distinct BW values is given in Table 3 and Fig. 6. The obtained results point out the enhanced efficacy of the TORA-DLSGO model under each BW value. For instance, with a BW of 5, the TORA-DLSGO model results in a minimized ECM of 2.0389, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.8015, 5.8015, 3.6821, 3.0678, and 2.7914, respectively. With a BW of 10, the TORA-DLSGO technique results in a minimized ECM of 0.1653, while the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.3561, 0.9331, 0.7488, 0.5492, and 0.3342, respectively. Similarly, with a BW of 12, the TORA-DLSGO technique results in a decreased ECM of 0.0578, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher ECM values of 5.4636, 0.3956, 0.3035, 0.2267, and 0.1653, respectively.


Figure 6: ECM analysis of TORA-DLSGO method under distinct bandwidth

Table 4 and Fig. 7 show a comparative analysis of the TORA-DLSGO technique with other existing models. The results portray the improvements of the TORA-DLSGO approach over the other models under all delay thresholds (DET). For instance, with a DET of 80, the TORA-DLSGO technique achieves an improved reward of 0.5862, while the random, greedy, DQN, DDPG, and FL-DDPG models attain lower reward values of 0.4178, 0.4913, 0.4887, 0.4991, and 0.5519, respectively. With a DET of 90, the TORA-DLSGO approach attains an improved reward of 0.8352, while the random, greedy, DQN, DDPG, and FL-DDPG systems accomplish lower reward values of 0.4900, 0.5842, 0.6190, 0.6667, and 0.8111, respectively. With a DET of 100, the TORA-DLSGO model attains an improved reward of 0.8967, whereas the random, greedy, DQN, DDPG, and FL-DDPG methods attain lower reward values of 0.5661, 0.6976, 0.7389, 0.8034, and 0.8743, respectively.


Figure 7: Reward analysis of TORA-DLSGO approach under distinct delay threshold

A brief delay (DEL) inspection of the TORA-DLSGO approach under distinct IoT index values is given in Table 5 and Fig. 8. The attained outcomes point out the superior efficiency of the TORA-DLSGO technique under each IoT index value. For example, with an IoT index of 1, the TORA-DLSGO approach results in a minimized DEL of 0.0434, while the random, greedy, DQN, DDPG, and FL-DDPG methods attain higher DEL values of 0.0992, 0.0677, 0.0759, 0.0552, and 0.0853, respectively. With an IoT index of 5, the TORA-DLSGO method results in a decreased DEL of 0.0227, whereas the random, greedy, DQN, DDPG, and FL-DDPG models reach higher DEL values of 0.0316, 0.0989, 0.0874, 0.0722, and 0.0675, respectively. Similarly, with an IoT index of 9, the TORA-DLSGO model results in a reduced DEL of 0.0230, while the random, greedy, DQN, DDPG, and FL-DDPG approaches obtain higher DEL values of 0.0992, 0.0997, 0.0798, 0.0811, and 0.0709, respectively.


Figure 8: Delay analysis of TORA-DLSGO approach under distinct IoT index

4  Conclusion

In this study, we have developed a new TORA-DLSGO algorithm for task offloading and resource allocation in the IoT-based MEC environment. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, which enables an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. In the presented TORA-DLSGO technique, the DBN model is used for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems. In the future, metaheuristic-based task scheduling schemes can be designed to optimize the makespan in the IoT-based MEC environment.

Funding Statement: This research was supported by the Technology Development Program of MSS (No. S3033853).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. S. Dong, Y. Xia and J. Kamruzzaman, “Quantum particle swarm optimization for task offloading in mobile edge computing,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3225313

2. L. Tan, Z. Kuang, J. Gao and L. Zhao, “Energy-efficient collaborative multi-access edge computing via deep reinforcement learning,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3213603

3. L. Sun, L. Wan and X. Wang, “Learning-based resource allocation strategy for industrial IoT in UAV-enabled MEC systems,” IEEE Transactions on Industrial Informatics, vol. 17, no. 7, pp. 5031–5040, 2020.

4. S. Talwani, J. Singla, G. Mathur, N. Malik, N. Z. Jhanjhi et al., “Machine-learning-based approach for virtual machine allocation and migration,” Electronics, vol. 11, no. 19, pp. 3249, 2022.

5. S. K. Mishra, S. Mishra, A. Alsayat, N. Z. Jhanjhi, M. Humayun et al., “Energy-aware task allocation for multi-cloud networks,” IEEE Access, vol. 8, pp. 178825–178834, 2020.

6. G. G. Wang, M. Lu, Y. Q. Dong and X. J. Zhao, “Self-adaptive extreme learning machine,” Neural Computing and Applications, vol. 27, no. 2, pp. 291–303, 2016.

7. Y. Wang, X. Qiao and G. G. Wang, “Architecture evolution of convolutional neural network using monarch butterfly optimization,” Journal of Ambient Intelligence and Humanized Computing, 2022. https://doi.org/10.1007/s12652-022-03766-4

8. W. Fan, Z. Chen, Z. Hao, Y. Su, F. Wu et al., “DNN deployment, task offloading, and resource allocation for joint task inference in IIoT,” IEEE Transactions on Industrial Informatics, 2022. https://doi.org/10.1109/TII.2022.3192882

9. H. Lu, C. Gu, F. Luo, W. Ding, S. Zheng et al., “Optimization of task offloading strategy for mobile edge computing based on multi-agent deep reinforcement learning,” IEEE Access, vol. 8, pp. 202573–202584, 2020.

10. Y. Wang, J. Yang, X. Guo and Z. Qu, “Satellite edge computing for the internet of things in aerospace,” Sensors, vol. 19, no. 20, pp. 4375, 2019.

11. Z. Wang, G. Xue, S. Qian and M. Li, “CampEdge: Distributed computation offloading strategy under large-scale AP-based edge computing system for IoT applications,” IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6733–6745, 2020.

12. S. Mao, S. He and J. Wu, “Joint UAV position optimization and resource scheduling in space-air-ground integrated networks with mixed cloud-edge computing,” IEEE Systems Journal, vol. 15, no. 3, pp. 3992–4002, 2020.

13. H. Guo and J. Liu, “UAV-enhanced intelligent offloading for internet of things at the edge,” IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2737–2746, 2019.

14. Y. Zhang, X. Zhou, Y. Teng, J. Fang and W. Zheng, “Resource allocation for multi-user MEC system: Machine learning approaches,” in Int. Conf. on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, pp. 794–799, 2018.

15. J. Gao, Z. Kuang, J. Gao and L. Zhao, “Joint offloading scheduling and resource allocation in vehicular edge computing: A two layer solution,” IEEE Transactions on Vehicular Technology, 2022. https://doi.org/10.1109/TVT.2022.3220571

16. L. Tan, Z. Kuang, L. Zhao and A. Liu, “Energy-efficient joint task offloading and resource allocation in OFDMA-based collaborative edge computing,” IEEE Transactions on Wireless Communications, vol. 21, no. 3, pp. 1960–1972, 2021.

17. X. Deng, J. Yin, P. Guan, N. N. Xiong, L. Zhang et al., “Intelligent delay-aware partial computing task offloading for multi-user Industrial Internet of Things through edge computing,” IEEE Internet of Things Journal, 2021. https://doi.org/10.1109/JIOT.2021.3123406

18. Q. Chen, Z. Kuang and L. Zhao, “Multiuser computation offloading and resource allocation for cloud–edge heterogeneous network,” IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3799–3811, 2021.

19. X. Zhu and M. Zhou, “Multiobjective optimized cloudlet deployment and task offloading for mobile-edge computing,” IEEE Internet of Things Journal, vol. 8, no. 20, pp. 15582–15595, 2021.

20. F. Dai, G. Liu, Q. Mo, W. Xu and B. Huang, “Task offloading for vehicular edge computing with edge-cloud cooperation,” World Wide Web, pp. 1–19, 2022. https://doi.org/10.1007/s11280-022-01011-8

21. X. Huang, Y. Cui, Q. Chen and J. Zhang, “Joint task offloading and QoS-aware resource allocation in fog-enabled Internet-of-Things networks,” IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7194–7206, 2020.

22. T. Alfakih, M. M. Hassan, A. Gumaei, C. Savaglio and G. Fortino, “Task offloading and resource allocation for mobile edge computing by deep reinforcement learning based on SARSA,” IEEE Access, vol. 8, pp. 54074–54084, 2020.

23. H. Zhao, J. Liu, H. Chen, J. Chen, Y. Li et al., “Intelligent diagnosis using continuous wavelet transform and gauss convolutional deep belief network,” IEEE Transactions on Reliability, 2022. https://doi.org/10.1109/TR.2022.3180273

24. N. M. Alfaer, H. M. Aljohani, S. Abdel-Khalek, A. S. Alghamdi and R. F. Mansour, “Fusion-based deep learning with nature-inspired algorithm for intracerebral haemorrhage diagnosis,” Journal of Healthcare Engineering, vol. 2022, pp. 1–12, 2022.

25. X. Chen and G. Liu, “Federated deep reinforcement learning-based task offloading and resource allocation for smart cities in a mobile edge network,” Sensors, vol. 22, no. 13, pp. 4738, 2022.




Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.