Open Access

ARTICLE

Latency-Aware Dynamic Second Offloading Service in SDN-Based Fog Architecture

Samah Ibrahim AlShathri, Dina S. M. Hassan*, Samia Allaoua Chelloug

Information Technology Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 84428, Saudi Arabia

* Corresponding Author: Dina S. M. Hassan. Email:

Computers, Materials & Continua 2023, 75(1), 1501-1526. https://doi.org/10.32604/cmc.2023.035602

Abstract

Task offloading is a key strategy in Fog Computing (FC). The definition of resource-constrained devices no longer applies only to sensors and Internet of Things (IoT) embedded system devices. Smart and mobile units can also be viewed as resource-constrained devices if power, cloud applications, and cloud data are counted among the required resources. In a cloud-fog-based architecture, a task instance running on an end device may need to be offloaded to a fog node to complete its execution. However, in a busy network, a second offloading decision is required when the fog node becomes overloaded. The possibility of offloading a task, for the second time, to a fog or a cloud node depends to a great extent on task importance, latency constraints, and required resources. This paper presents a dynamic service that determines which tasks can endure a second offloading. The task type, latency constraints, and amount of required resources are used to select the offloading destination node. This study proposes three heuristic offloading algorithms, each targeting a specific task type. An overloaded fog node can issue only one offloading request, executing one of these algorithms according to the task offloading priority. Offloading requests are sent to a Software Defined Networking (SDN) controller. The fog node and the controller jointly determine the number of offloaded tasks. Simulation results show that the average time required to select offloading nodes was improved by 33% compared to the dynamic fog-to-fog offloading algorithm. The distribution of workload converges to a uniform distribution when offloading latency-sensitive non-urgent tasks. The lowest offloading priority is assigned to latency-sensitive tasks with hard deadlines. At least 70% of these tasks are offloaded to fog nodes that are one to three hops away from the overloaded node.

Keywords


Cite This Article

S. I. AlShathri, D. S. M. Hassan and S. A. Chelloug, "Latency-aware dynamic second offloading service in sdn-based fog architecture," Computers, Materials & Continua, vol. 75, no.1, pp. 1501–1526, 2023.



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.