Computers, Materials & Continua
DOI: 10.32604/cmc.2022.023215
Article
Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT
1Department of Software Convergence, Soonchunhyang University, Asan, 31538, Korea
2Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA
3Department of Computer Software Engineering, Soonchunhyang University, Asan, 31538, Korea
*Corresponding Author: Seokhoon Kim. Email: seokhoon@sch.ac.kr
Received: 31 August 2021; Accepted: 15 October 2021
Abstract: Federated learning (FL) activates distributed on-device computation techniques to improve algorithm performance through the interaction of local model updates and global model distributions in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are significant challenges to be tackled. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) target to enforce a self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent aims to explore various actions for estimating expected long-term rewards in particular state observations. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selections. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs towards eFL aggregation servers with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme indicates deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticalities of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms the reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Keywords: Deep Q-networks; federated learning; network functions virtualization; quality of service; software-defined networking
1 Introduction
The fast-growing deployment of the Internet of Things (IoT) in cellular networks has exponentially increased massive data volumes and heterogeneous service types with the requirement of ultra-reliable low-latency communication (URLLC). International Data Corporation (IDC) forecasts that, by 2025, the data generated from 41.6 billion IoT devices will reach 79.4 ZB, which requires big data orchestration and network automation to be intelligent and adequate in future scenarios [1,2]. To control abundant IoT taxonomies and provide sufficient resources, machine learning and deep learning algorithms have been applied to develop smart solutions in edge intelligence for various service purposes by gathering local data for model training and testing [3,4]. Meanwhile, because IoT deployment has grown rapidly in privacy-sensitive sectors such as the Internet of Healthcare Things (IoHT), Internet of Vehicles (IoV), and Internet of People (IoP), the use of local raw data has to be user-consented and legally authorized before being transmitted to the central cloud [5,6]. Given these challenging issues, an intelligent provisioning scheme necessitates considering the security of local data privacy, communication reliability, and adequate computation resources.
Federated learning (FL) secures local data privacy, reduces communication costs, and provides a latency-efficient approach by distributing the global model selection and primary hyperparameters (e.g., the global model weights $w$) from a central aggregation server to the participating devices. Each device trains the model on its own local raw data and uploads only the resulting local model updates, which the server combines into an updated global model through aggregation averaging [7–11].
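For illustration, the following is a minimal sketch of this aggregation averaging step, assuming NumPy weight arrays; the names `client_weights` and `client_sizes` are illustrative placeholders, not the paper's notation:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregation averaging in the FedAvg style [7]: combine local model
    updates weighted by each client's local dataset size."""
    total = sum(client_sizes)
    # Each element of client_weights is a list of per-layer weight arrays.
    return [
        sum((n_k / total) * w_k[layer]
            for w_k, n_k in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Example: two clients, one-layer model, 100 vs. 300 local samples.
w_global = fed_avg(
    client_weights=[[np.array([0.2, 0.4])], [np.array([0.6, 0.8])]],
    client_sizes=[100, 300],
)
```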
Multi-access edge computing (MEC) leverages the computation power and storage capacity of the central cloud to provide a latency-efficient system, adequate Quality of Service (QoS) performance, and additional serving resources in edge networks [12,13]. 5G radio access networks (RAN) support stable connectivity and adaptability between massive users and MEC entities for driving big data communication traffic with the deployment of millimeter-Wave (mmWave), multiple-input and multiple-output (MIMO) antennas, device-to-device (D2D) communication, and radio resource management (RRM) functions. Moreover, to extend a global view of network environments and efficiently control heterogeneous MEC entities, software-defined networking (SDN) has been adopted. An adaptive transmission architecture in IoT networks is advanced by a joint SDN and MEC federation to enable intelligent edge optimization for low-deadline optimal path selection [14]. SDN separates the data plane (DP) and control plane (CP) to enable programmable functions, which adequately control the policies, flow tables, and actions on domain resource management within the RAN, core side, network functions virtualization (NFV), and MEC [15,16]. The convergence of MEC, SDN, and NFV enables networking application programming interfaces (API), sufficient resource pools, flexible orchestration, and programmability for logically enabling resource-sharing virtualization in an adaptive approach. To optimally allocate the resources and recommend the offloading decisions within NFV infrastructure (NFVI)-MEC, an intelligent agent or deep reinforcement learning approach can be applied as an enabler for network automation (eNA) that interacts with particular IoT device statuses, resource utilization, and network congestion states.
Deep Q-networks (DQN) have notably been used for addressing resource allocation and computation offloading problems in massive IoT networks [17]. There are three main procedures to construct a DQN-based model: the epsilon-greedy strategy, the deep neural network (DNN) function approximator, and the Q-learning algorithm based on the Bellman equation for handling the Markov decision process (MDP) problem, where $S$, $A$, $R$, and $\gamma$ denote the state space, action space, reward function, and discount factor, respectively [18].
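To make these three building blocks concrete, the following minimal sketch shows epsilon-greedy action selection and the Bellman-based Q-value target; `q_network` is assumed to be any Keras-style model and the dimensions are placeholders, not the paper's implementation:

```python
import numpy as np

def select_action(q_network, state, action_dim, epsilon):
    """Epsilon-greedy strategy: explore a random action with probability
    epsilon, otherwise exploit the maximum Q-value action."""
    if np.random.rand() < epsilon:
        return np.random.randint(action_dim)
    q_values = q_network.predict(state[np.newaxis, :], verbose=0)[0]
    return int(np.argmax(q_values))

def bellman_target(q_network, reward, next_state, gamma, done):
    """Q-learning target from the Bellman equation:
    y = r + gamma * max_a' Q(s', a') for non-terminal transitions."""
    if done:
        return reward
    next_q = q_network.predict(next_state[np.newaxis, :], verbose=0)[0]
    return reward + gamma * float(np.max(next_q))
```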
In this paper, the proposed system architecture is adopted to deploy multi-controller placement in the NFV architecture for observing various state abstractions. Multi-agent DQNs (MADQNs) explore actions on resource placement and computation decisions for offloading local model aggregation towards adequate eFL servers [19,20]. Two policies are obtained: PARAA for optimizing the virtual resource allocation and PICOA for recommending the eFL aggregation server offloading.
The rest of the paper is organized as follows. The system models, including architectural framework and preliminaries of proposed MADQNs components, are presented in Section 2. The proposed approach is thoroughly described in Section 3. In Section 4, simulation setup, performance metrics, reference schemes, and result discussions are shown. Section 5 presents the final conclusion.
2 System Model
In the system architecture, the SDN CP allows a programmable DQN-based mechanism to observe the states of the network environment via the OpenFlow (OF) protocol in the southbound interface (SBI), which allows the cluster heads to contribute significant roles in collecting IoT node data and resource utilization [21]. The proposed SDN/NFV-enabled architecture for supporting MADQNs programmability and offering multiple eFL servers within the NFVI-MEC environment is shown in Fig. 1. In the proposed system architecture, the centralized SDN controller communicates with the NFV-MANO layer for management functions in the VNF manager (VNFM) and VIM through orchestration interfaces [22]. The Ve-Vnfm interface interacts between the SDN controller (as VNFs) and the VNFM for operating the lifecycle of network services and resource management. The Nf-Vi interface allows the controllability of NFVI resource pools for the central SDN controller acting as a VIM [23]. To activate connectivity services between virtual machines (VM) and VNFs, the Vn-Nf logical interface is used in the proposed architecture to adjust the virtual storage and computing resources based on VNF mapping orchestration. To configure resource allocation based on the optimal PARAA policy, decentralized SDN controllers acting as a VIM and VNFs are proposed in this scheme to formulate the parameterization of action-based VNFFG rendering for the service function chaining (SFC) management system. The proposed MANO manages the VNF placement with appropriate element management system (EMS), virtual deployment unit (VDU), and VM capabilities based on the allocation policy in particular congestion state spaces. A resource-constrained state observation prompts the agents to adjust the backup instances with model service prioritization. After the resources are adjusted, PICOA computes the policy to advocate the eFL server for local model aggregation offloading.

Within the multi-controllers, the flow entry installation process is configured reactively in the centralized entity, and each cluster head is commanded by the OF protocol with a flow rule installation. Although proactive mode enables each OF-enabled switch to set up flow rules internally, the proposed agent controller prioritizes reactive rule installation to ensure the proposed central policy configuration. The agent controller checks the packet flow against all the global tables and updates the counters for instruction set executions. In our proposed scheme, the flow priority, hard timeout, and idle timeout are measured by the remaining MEC resources, time intervals, and criticalities of FL model services. If there is no match within the global tables, the agent controller executes the add-flow method based on the particular state-action approximation to accordingly append the datapath id, match details, actions, priority, and buffer id. With different dimensional features and scale values, the SDN database entity is expected to handle the storage and preprocessing phases. For the proposed agent model, the data required from the SDN database are the uplink/downlink resource adjustment statuses, the resources of the eFL MEC nodes, and the default core resource utilization. With these features, the agent feasibly acquires the state observation spaces for sampling and exploring the potential actions.
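As an illustration of the reactive add-flow behavior described above, a condensed sketch against the RYU OpenFlow 1.3 API [28] follows; the priority and timeout values derived from MEC resources and FL criticality would be supplied by the agent and are placeholders here:

```python
# Sketch of a reactive flow installation inside a RYU app (OpenFlow 1.3).
# Priority, idle_timeout, and hard_timeout stand in for the resource- and
# criticality-based measurements computed by the agent controller.
def add_flow(datapath, match, actions, priority, idle_timeout, hard_timeout,
             buffer_id=None):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    # Apply the action set immediately on matching packets.
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    kwargs = dict(datapath=datapath, priority=priority, match=match,
                  instructions=inst, idle_timeout=idle_timeout,
                  hard_timeout=hard_timeout)
    if buffer_id is not None:
        kwargs["buffer_id"] = buffer_id  # reuse the buffered packet, if any
    datapath.send_msg(parser.OFPFlowMod(**kwargs))
```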
In this context, the main components of MADQNs consist of the state, action, reward, and transition probability. For the hyperparameters, the values are optimized by standard parameterization for controlling the behavior of the learning model, such as the learning rate $\alpha$, the discount factor $\gamma$, and the exploration rate $\epsilon$.
State: in the MADQNs environment, the state spaces comprise two main observations, one for PARAA and one for PICOA. For PARAA, the state consists of control statuses and a global functional view, including the extant maximum and minimum resources of the virtual MEC resource pools, denoted as $R^{\max}$ and $R^{\min}$. For PICOA, the state gathers the loading statuses of the candidate eFL servers together with the congestion conditions between clients and aggregation servers.
Action: in this environment, the batch of potential actions refers to the resource updates and SFC, which are collectively mapped by VNFFG parameterization towards the virtual MEC resource pools in the NFVI entity. Numerically, the action space $a$ specifies the discretization operation scale of increment, decrement, and static, denoted as $a \in \{a^{+}, a^{-}, a^{0}\}$.
Reward: the intermediate reward at a particular time $t$, denoted as $r_t$, evaluates the effectiveness of the selected action in the observed state; the agents aim to maximize the expected long-term cumulative reward discounted by $\gamma$.
Transition Probability: a different policy determines a distinct transition step for sampling the next state observation. In the early stage, the randomness of the transition policy allows the agents to explore the actions without specified probabilities. However, once the exploration strategy reaches an optimal goal of resource allocation rewards, the epsilon-greedy policy executes the transition, denoted as selecting a random action with probability $\epsilon$ and the maximum Q-value action with probability $1-\epsilon$.
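To make the tuple concrete, a hypothetical Python encoding of the PARAA observation, the discrete action scale, and an intermediate reward is sketched below; the field layout and the weighting between utilization and FL service criticality are assumptions, not the paper's formulation:

```python
import numpy as np

# Hypothetical PARAA observation: extant max/min resources per virtual
# MEC pool concatenated with a congestion indicator per eFL server.
def build_state(max_resources, min_resources, congestion_levels):
    return np.concatenate([max_resources, min_resources,
                           congestion_levels]).astype(np.float32)

# Discrete operation scale: increment, static, or decrement one resource unit.
ACTIONS = (+1, 0, -1)

def intermediate_reward(utilization, criticality, drops,
                        w_util=0.7, w_drop=0.3):
    """Illustrative reward at time t: reward balanced utilization of
    critical FL model services and penalize packet drops."""
    return w_util * criticality * utilization - w_drop * drops
```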
3 Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications
To describe the MADQNs softwarization framework with the proposed controllers towards virtual resource allocation and eFL aggregation server selection, this section delivers two primary aspects of the proposed scheme, including the algorithm flows for multi-agent in NFVI-MEC and self-organizing agent controllers for collaborative updates in NFV-enabled eFL.
3.1 Algorithm Flow for MADQNs in Proposed Environment
To optimize the policy of the model, the Q-table and DNN are computed in parallel to support the trade-off between time criticality and precision; however, the DNN acts as the central control and is structured as the prime approximator. Each potential state-action pair has a Q-value that accumulates in both the Q-table and the approximated DNN output layer after the exploration strategy. With a feedforward network, numerous weight initializations, neurons, and multiple layers of perceptrons, the Q-value decision-making is more accurate, yet the execution time is simultaneously higher. To optimize a policy for a long-term self-learning environment, the randomness in the exploration processes of the networking environment has to be handled, and the hyperparameters are required to be well-assigned and related to the fine-grained scenario. The optimal policy for the exploitation strategy as the end goal is denoted as $\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a)$, which selects the action with the maximum Q-value in each state.
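As a hedged illustration of this parallel behavior, the sketch below serves time-critical decisions from a cached Q-table and falls back to the slower but more precise DNN approximator; the dictionary cache and the `q_network` interface are assumptions:

```python
import numpy as np

def best_action(state_key, state_vec, q_table, q_network, time_critical):
    """Hybrid lookup: a cached Q-table row answers time-critical queries
    instantly, while the DNN refreshes the estimate otherwise."""
    if time_critical and state_key in q_table:
        q_values = q_table[state_key]                  # fast path
    else:
        q_values = q_network.predict(state_vec[np.newaxis, :], verbose=0)[0]
        q_table[state_key] = q_values                  # cache for later reuse
    return int(np.argmax(q_values))                    # pi*(s) = argmax_a Q(s, a)
```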
The value function is computed for policy transformation and a low-dimensional perspective to obtain the value of state $s$ and create sample paths. It is significant for identifying the resource condition at a particular time. To differentiate between the random exploration policies, the cumulative reward is the key value whose expectation is to be maximized. The value function captures the expected cumulative reward obtained by following a particular policy $\pi$ from state $s$: $V^{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \mid s_{0} = s\right]$.
The DNN estimates the action-value function $Q(s, a; \theta)$ with weights $\theta$, which approximates the optimal $Q^{*}(s, a)$ updated by the Bellman equation $Q(s, a) \leftarrow Q(s, a) + \alpha \left[r + \gamma \max_{a'} Q(s', a') - Q(s, a)\right]$ [18].
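A minimal Keras-style [25,26] realization of this approximator and a single Bellman-target fitting step might look as follows; the layer widths and optimizer are illustrative choices, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras

def build_q_network(state_dim, action_dim, learning_rate=1e-3):
    """Feedforward DNN mapping a state vector to one Q-value per action."""
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(action_dim, activation="linear"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="mse")  # squared Bellman error
    return model

def train_step(model, state, action, target):
    """Fit Q(s, a) toward the Bellman target y = r + gamma * max Q(s', .)."""
    q_values = model.predict(state[np.newaxis, :], verbose=0)
    q_values[0, action] = target          # only the taken action is updated
    model.fit(state[np.newaxis, :], q_values, epochs=1, verbose=0)
```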
By applying the proposed MADQNs model, the average reward aggregation for the resource allocation environment is obtained. The average reward output of optimal resource allocation is steady for most episodes but retains some downward marks under limited resource utilization, which leads to unstable management. The steady and unsteady state-action pairs are detected, and the unsteady pairs need to be significantly enhanced to avoid high packet drop scenarios in heterogeneous local model update communications.
3.2 Self-Organizing Agent Controllers for Optimal Edge Aggregation Decisions
The implicit algorithm flow is proposed to handle the instability of the MADQNs model in the NFVI-MEC environment by leveraging the capabilities of the agent controllers and orchestrator. The proposed method installs flow rules for each IoT cluster head with the adjustment of uplink/downlink resource utilization priority. The orchestrator configures the VNFFG descriptors following the resource allocation policy from Algorithm 1 towards eFL aggregation with optimal MEC resource pools. The proposed agent controller is required to orchestrate the flow entry tables of multiple IoT cluster heads by applying the convergence of the resource allocation policy and OF controller flow stats. Each state-action pair $(s, a)$ whose episode yields a low aggregated reward is indicated as deficient; the backup VNF instances are then modified and the virtual resources reallocated before the corresponding flow entries are reinstalled, as sketched below.
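A hedged sketch of that enhancement loop is given below; `scale_backup_instances` and `render_vnffg` are hypothetical orchestrator calls standing in for the MANO operations described above, and the reward threshold is a placeholder:

```python
def enhance_deficient_policies(episode_rewards, policies, orchestrator,
                               reward_threshold):
    """Flag episodes whose aggregated reward falls below a threshold, then
    reallocate backup VNF instances and re-render the VNFFG so the
    corresponding flow entries point at sufficient MEC resource pools."""
    for episode, reward in episode_rewards.items():
        if reward < reward_threshold:                    # deficient episode
            state, action = policies[episode]
            orchestrator.scale_backup_instances(state)   # hypothetical API
            orchestrator.render_vnffg(state, action)     # hypothetical API
```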
4 Performance Evaluation
To validate the theoretical approach, this section describes the three main simulation environments, including the MADQNs model construction, the SDN/NFV control performance, and a 5G NR network experiment to capture the E2E QoS performances.
4.1 Simulation Setup
By using the OpenAI Gym library [24], the environment setup requires four primary functions. The initialization (init) function declares the available characteristics of the state observations (see Eqs. (1) and (2)) in the setup environment and sets the dimensional spaces of states and actions to be explored. The step function executes the selected action and returns the next state observation, the intermediate reward, and the episode status, while the reset function restores the environment to its initial observation for a new episode, and the render function optionally exposes the environment status for inspection. The DNN approximator was built on TensorFlow with the Keras library [25,26].
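A skeleton of such an environment under the classic `gym.Env` interface [24] is sketched below; the observation bounds, dynamics, and reward are placeholders rather than the paper's Eqs. (1) and (2):

```python
import gym
import numpy as np
from gym import spaces

class EdgeAllocationEnv(gym.Env):
    """Hypothetical skeleton of the MADQNs environment; the state size and
    transition dynamics are illustrative, not the paper's values."""

    def __init__(self, state_dim=8, num_actions=3):
        super().__init__()
        self.observation_space = spaces.Box(0.0, 1.0, shape=(state_dim,),
                                            dtype=np.float32)
        self.action_space = spaces.Discrete(num_actions)
        self.state = np.zeros(state_dim, dtype=np.float32)

    def reset(self):
        # Restore an initial resource/congestion observation for a new episode.
        self.state = self.observation_space.sample()
        return self.state

    def step(self, action):
        # Apply the allocation action, then return (state, reward, done, info).
        self.state = self.observation_space.sample()   # placeholder dynamics
        reward = float(-abs(action - 1))               # placeholder reward
        return self.state, reward, False, {}
```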
To capture the particular QoS performance metrics of the proposed controllers and NFV modules, mini-nfv on top of Mininet is used to create the data plane topology, VNF descriptors, and VNFFG descriptors. Mini-nfv supports an external SDN controller platform for experimentation, and the forwarding rule installation is configured by FlowManager on a RYU-based platform [27–31]. The descriptors set the VDU and VM capabilities based on the selected actions from the optimal policy table, and each flow entry is configured following the forwarding graph. Fig. 4 presents the interaction of the convergence; however, the virtual links from the communication perspective are still restricted for explicit fine-grained performance.
A discrete-event network simulator, namely ns-3, is used in this environment to perform the E2E convergence [32–34]. The simulation was executed for 430 s, which was divided into 4 consecutive network congestion conditions to reflect the service-learning criticalities of FL communication reliability. In this setup, there are 4 eFL nodes, and the virtual extended network loading was configured between 0 and 250. Additionally, there are 4 remote radio heads (RRHs), and the user data rate is between 20 and 72 Mbps. The model updates rely on the network situation, and a congested environment increases the loss probability between clients and aggregation servers; the congestion states lower the model accuracy and reduce the global model reliability. The payload size was set to 1024 bytes, and the QoS class identifier (QCI) mechanism was set over the user datagram protocol (UDP). At the core side, the point-to-point (P2P) link bandwidth was configured to 9 Gb/s, and the buffer queuing discipline was operated by the random early detection (RED) queue algorithm. The default link delay of the MEC was configured as 2 ms. The hyperparameters of MADQNs, including the learning rate $\alpha$, the discount factor $\gamma$, and the epsilon decay for balancing exploration and exploitation, were configured in advance to conduct the experiments with maximized output expectations in terms of computation intensity and time constraints.
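For the core-side configuration, a short sketch is given below, assuming the classic ns-3 Python bindings, where module and helper names mirror the C++ API (newer cppyy-based bindings differ); the values follow the text above:

```python
# Sketch of the core-side P2P link with a RED queue discipline in ns-3.
import ns.core
import ns.network
import ns.internet
import ns.point_to_point
import ns.traffic_control

nodes = ns.network.NodeContainer()
nodes.Create(2)

p2p = ns.point_to_point.PointToPointHelper()
p2p.SetDeviceAttribute("DataRate", ns.core.StringValue("9Gbps"))
p2p.SetChannelAttribute("Delay", ns.core.StringValue("2ms"))  # MEC link delay
devices = p2p.Install(nodes)

stack = ns.internet.InternetStackHelper()
stack.Install(nodes)

# Random early detection (RED) as the buffer queuing discipline.
tch = ns.traffic_control.TrafficControlHelper()
tch.SetRootQueueDisc("ns3::RedQueueDisc")
tch.Install(devices)
```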
4.2 Reference Schemes and Performance Metrics
To illustrate the proposed and reference approaches in overall performance, four different resource control and eFL selection policies were simulated. The resource pools represented the capacities extracted by the proposed actions of the model. Each scheme triggered different actions, which contained the VNFFG mapping to particular virtual resources. The reference schemes were simulated as control policies for IoT congestion scenarios, including maximal rate experienced-based eFL selection (MRES), single-agent DQN-control (SADQN), and MADQNs. The proposed scheme extended the PARAA and PICOA policies by enhancing the deficient actions as described in Algorithm 2.
The QoS metrics used to evaluate the comparison between the reference and proposed approaches are presented as follows [35,36].
The packet drop ratio in the experimental simulation is the ratio of the total packets lost to the total packets successfully transmitted. The packet drop counts are illustrated to compare specifically within this particular experimental setup. The packet delivery ratio in the simulation environment is calculated by subtracting the packet drop ratio from the total ratio.
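Under these definitions, the metrics can be computed as in the following sketch, where the packet and byte counts are assumed to come from the simulator's flow statistics:

```python
def qos_metrics(sent, received, total_bytes, duration_s):
    """Compute the comparison metrics from raw simulation counters."""
    dropped = sent - received
    drop_ratio = dropped / received if received else 0.0   # lost / delivered
    delivery_ratio = 1.0 - drop_ratio                      # total minus drop ratio
    throughput_mbps = (total_bytes * 8) / (duration_s * 1e6)
    return drop_ratio, dropped, delivery_ratio, throughput_mbps
```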
4.3 Result Discussions
The proposed agent output the offloading decisions of 142, 117, 371, and 370 local model updates toward the 4 eFL servers, respectively. In the SDN/NFV-enabled architecture, the primary consideration is the QoS metrics after installing and executing the forwarding rules [37,38]. The comparison between the proposed and reference schemes is shown in Fig. 5. Within the 430 s of 4 consecutive network congestion conditions, the average control delay of the proposed scheme is 8.4723 ms, which was 28.2833, 25.6824, and 11.7175 ms lower than MRES, SADQN, and MADQNs, respectively.
In the E2E simulation, the emphasis on FL model reliability in real-time routing networks was considered. Fig. 6a depicts the average delays of E2E communications in the edge cloud systems. The data communication between the aggregation servers utilized IP network communications. The graph presents the comparisons between the proposed and reference methods with various possibilities of forwarding paths. The proposed scheme achieved an average delay of 12.8948 ms, which was 64.3321, 150.9983, and 169.9983 ms lower than MADQNs, SADQN, and MRES, respectively. The proposed scheme distinguished the loading metrics of every possible serving MEC server. The predicted metrics represent the loading statuses of the MEC servers; therefore, the MEC with the lowest loading metric is considered the optimal server for serving incoming local model update requests.
MADQNs deployed the control policies of both deficient and efficient output episodes. The downlink and uplink transmissions are strongly congested under heavy multi-dimensional model updates, while multiple virtual MECs are offloaded and reallocated deficiently. To gain unoccupied resource pools for QoS assurances, the proposed scheme extended MADQNs and considered the optimal resource pools for high mission-critical FL model traffic, which covers the networking states in over-bottleneck peak-hour circumstances. While the extant communication and computation resources are in use, the proposed controllers and orchestrator advance the positive weights of the efficient state-action pairs so that the subsequent flow entries steer local model updates toward eFL servers with sufficient resources.
In the congested FL communication networks, the local model updates handled by the proposed scheme maintained higher packet delivery ratios and throughput than the reference schemes, which preserves the reliability of the global model aggregation in each communication round.
5 Conclusion
This paper proposed a multi-agent approach, including PARAA for optimizing virtual resource allocation and PICOA for recommending eFL aggregation server offloading, in order to meet the significance of URLLC for mission-critical IoT model services. An SDN/NFV-enabled architectural framework for controlling the proposed forwarding rules and virtual resource orchestration was adopted in software-defined IoT networks. The MADQNs model interacted with the gathered state observations and contributed a collection of exploration policies for sampling the allocation rules under the expansion of edge intelligence. To enhance deficient policies, the proposed algorithms targeted weak episodes with low aggregated rewards under the optimal learning rate hyperparameter. The proposed agent controller outputs a setup of long-term self-organizing flow entries with sufficient computation and communication resource placement. The optimal actions are used to correspondingly configure the VNFFG descriptors and map towards adequate virtual MEC resource pools within the four experimental congestion states. The simulation was conducted in three main aspects, and based on the validation, the proposed scheme contributed a promising approach for achieving efficient eFL communications in future massive IoT congestion states.
Funding Statement: This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048), and this research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References
1. F. Hussain, S. A. Hassan, R. Hussain and E. Hossain, “Machine learning for resource management in cellular and IoT networks: Potentials, current solutions, and open challenges,” IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 1251–1275, 2020. [Google Scholar]
2. D. Reinsel, J. Gantz and J. Rydning, “The digitalization of the world: From edge to core,” in IDC White Paper, Seagate Inc., Framingham, MA, USA, vol. 1, pp. 1–28, 2018. [Google Scholar]
3. S. Kim and D.-Y. Kim, “Adaptive data transmission method according to wireless state in long range wide area networks,” Computers, Materials & Continua, vol. 64, no. 1, pp. 1–15, 2020. [Google Scholar]
4. T. K. Rodrigues, K. Suto, H. Nishiyama, J. Liu and N. Kato, “Machine learning meets computation and communication control in evolving edge and cloud: Challenges and future perspective,” IEEE Communications Surveys & Tutorials, vol. 22, no. 1, pp. 38–67, 2020. [Google Scholar]
5. B. Custers, A. Sears, F. Dechesne, I. Georgieva, T. Tani et al., EU Personal Data Protection in Policy and Practice. Heidelberg, BE, DEU: Springer, 2019. [Online]. Available: https://doi.org/10.1007/978-94-6265-282-8. [Google Scholar]
6. W. Saeed, Z. Ahmad, A. I. Jehangiri, N. Mohamed, A. I. Umar et al., “A fault tolerant data management scheme for healthcare internet of things in fog computing,” KSII Transactions on Internet and Information Systems, vol. 15, no. 1, pp. 35–57, 2021. [Google Scholar]
7. B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. Y. Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proc. of the 20th Int. Conf. on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, vol. 54, pp. 1273–1282, 2017. [Google Scholar]
8. W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y. Liang et al., “Federated learning in mobile edge networks: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031–2063, 2020. [Google Scholar]
9. X. Mo and J. Xu, “Energy-efficient federated edge learning with joint communication and computation design,” Journal of Communications and Information Networks, vol. 6, no. 2, pp. 110–124, 2021. [Google Scholar]
10. Y. Ye, S. Li, F. Liu, Y. Tang and W. Hu, “EdgeFed: Optimized federated learning based on edge computing,” IEEE Access, vol. 8, pp. 209191–209198, 2020. [Google Scholar]
11. J. Ren, G. Yu and G. Ding, “Accelerating DNN training in wireless federated edge learning systems,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 1, pp. 219–232, 2021. [Google Scholar]
12. D.-Y. Kim, S. Kim and J. H. Park, “A combined network control approach for the edge cloud and LPWAN-based IoT services,” Concurrency and Computation: Practice and Experience, vol. 32, no. 1, 2020. [Online]. Available: https://doi.org/10.1002/cpe.4406. [Google Scholar]
13. Z. Li and Q. Zhu, “An offloading strategy for multi-user energy consumption optimization in multi-MEC scene,” KSII Transactions on Internet and Information Systems, vol. 14, no. 10, pp. 4025–4041, 2020. [Google Scholar]
14. X. Li, D. Li, J. Wan, C. Liu and M. Imran, “Adaptive transmission optimization in SDN-based industrial internet of things with edge computing,” IEEE Internet of Things Journal, vol. 5, no. 3, pp. 1351–1360, 2018. [Google Scholar]
15. S. Shahzadi, F. Ahmad, A. Basharat, M. Alruwaili, S. Alanazi et al., “Machine learning empowered security management and quality of service provision in SDN-NFV environment,” Computers, Materials & Continua, vol. 66, no. 3, pp. 2723–2749, 2021. [Google Scholar]
16. D.-Y. Kim and S. Kim, “Network-aided intelligent traffic steering in 5G mobile networks,” Computers, Materials & Continua, vol. 65, no. 1, pp. 243–261, 2020. [Google Scholar]
17. W. Chen, X. Qiu, T. Cai, H.-N. Dai, Z. Zheng et al., “Deep reinforcement learning for internet of things: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 23, no. 3, pp. 1659–1692, 2021. [Google Scholar]
18. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015. [Google Scholar]
19. N. Yuan, C. Jia, J. Lu, S. Gua, W. Li et al., “A DRL-based container placement scheme with auxiliary tasks,” Computers, Materials & Continua, vol. 64, no. 3, pp. 1657–1671, 2020. [Google Scholar]
20. T. Wu, P. Zhou, B. Wang, A. Li, X. Tang et al., “Joint traffic control and multi-channel reassignment for core backbone network in SDN-IoT: A multi-agent deep reinforcement learning approach,” IEEE Transactions on Network Science and Engineering, vol. 8, no. 1, pp. 231–245, 2021. [Google Scholar]
21. “OpenFlow switch specifications,” Open Networking Foundation, 2014. [Online]. Available: https://opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.3.4.pdf. [Google Scholar]
22. “Network functions virtualisation (NFV); ecosystem; report on SDN usage in NFV architectural framework,” White Paper, ETSI, Sophia Antipolis, France, 2015. [Online]. Available: https://www.etsi.org/deliver/etsi_gs/NFV-EVE/001_099/005/01.01.01_60/gs_nfv-eve005v010101p.pdf. [Google Scholar]
23. “Network functions virtualisation (NFV) release 2; management and orchestration; architectural framework specification,” White Paper, ETSI, Sophia Antipolis, France, 2021. [Online]. Available: https://www.etsi.org/deliver/etsi_gs/NFV/001_099/006/02.01.01_60/gs_NFV006v020101p.pdf. [Google Scholar]
24. G. Brockman, V. Cheung, L. Petterson, J. Schneider, J. Schulman et al., “OpenAI gym,” arXiv preprint arXiv: 1606.01540, 2016. [Google Scholar]
25. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen et al., “TensorFlow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv: 1603.04467, 2016. [Google Scholar]
26. F. Chollet, “Keras,” 2015. [Online]. Available: https://github.com/fchollet/keras. [Google Scholar]
27. B. Lantz, B. Heller and N. McKeown, “A network in a laptop: Rapid prototyping for software-defined networks,” in Proc. of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, New York, NY, USA, 2010. [Online]. Available: http://doi.acm.org/10.1145/1868447.1868466. [Google Scholar]
28. “Ryu,” Faucet Organisation. [Online]. Available: https://github.com/faucetsdn/ryu. [Google Scholar]
29. J. Castillo, “Mini-nfv framework,” 2018. [Online]. Available: https://github.com/josecastillolema/mini-nfv. [Google Scholar]
30. H. Babbar, S. Rani, M. Masud, S. Verma, D. Anand et al., “Load balancing algorithm for migrating switches in software-defined vehicular networks,” Computers, Materials & Continua, vol. 67, no. 1, pp. 1301–1316, 2021. [Google Scholar]
31. J. Ali, G.-M. Lee, B. Roh, D. K. Ryu and G. Park, “Software-defined networking approaches for link failure recovery: A survey,” Sustainability, vol. 12, no. 10, 2020. [Google Scholar]
32. G. F. Riley and T. R. Henderson, “The ns-3 network simulator,” in Modeling and Tools for Network Simulation, Berlin, Heidelberg: Springer, 2010. [Online]. Available: https://doi.org/10.1007/978-3-642-12331-3_2. [Google Scholar]
33. J. Ali and B. Roh, “An effective hierarchical control plane for software-defined networks leveraging TOPSIS for end-to-end QoS class-mapping,” IEEE Access, vol. 8, pp. 88990–89006, 2020. [Google Scholar]
34. S. Math, P. Tam and S. Kim, “Intelligent real-time IoT traffic steering in 5G edge networks,” Computers, Materials & Continua, vol. 67, no. 3, pp. 3433–3450, 2021. [Google Scholar]
35. J. Ali, B. Roh and S. Lee, “QoS improvement with an optimum controller selection for software-defined networks,” PLoS ONE, vol. 14, no. 5, pp. 1–37, 2019. [Google Scholar]
36. P. Tam, S. Math and S. Kim, “Intelligent massive traffic handling scheme in 5G bottleneck backhaul networks,” KSII Transactions on Internet and Information Systems, vol. 15, no. 3, pp. 874–890, 2021. [Google Scholar]
37. J. Ali and B. Roh, “Quality of service improvement with optimal software-defined networking controller and control plane clustering,” Computers, Materials & Continua, vol. 67, no. 1, pp. 849–875, 2021. [Google Scholar]
38. M. Beshley, N. Kryvinska, H. Beshley, M. Medvetskyi and L. Barolli, “Centralized QoS routing model for delay/loss sensitive flows at the SDN-IoT infrastructure,” Computers, Materials & Continua, vol. 69, no. 3, pp. 3727–3748, 2021. [Google Scholar]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.