Search Results (10)
  • Open Access

    REVIEW

    Cloud Datacenter Selection Using Service Broker Policies: A Survey

    Salam Al-E’mari1, Yousef Sanjalawe2,*, Ahmad Al-Daraiseh3, Mohammad Bany Taha4, Mohammad Aladaileh2

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.1, pp. 1-41, 2024, DOI:10.32604/cmes.2023.043627

    Abstract Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers, whose performance can be hindered by bottlenecks within the realm of proliferating CC services. A linchpin in CC’s performance, the Cloud Service Broker (CSB), orchestrates DC selection. Failure to adroitly route user requests to suitable DCs transforms the CSB into a bottleneck, endangering service quality. To tackle this, deploying an efficient CSB policy becomes imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Amidst numerous CSB policies, their implementation grapples with challenges like costs and availability. This article undertakes a holistic…

  • Open Access

    ARTICLE

    Data Center Energy Conservation by Heat Pipe Based Precooler System

    Randeep Singh*, Masataka Mochizuki, Koichi Mashiko, Thang Nguyen*

    Frontiers in Heat and Mass Transfer, Vol.13, pp. 1-6, 2019, DOI:10.5098/hmt.13.24

    Abstract In the present paper, data center energy conservation systems based on a heat pipe heat exchanger (HPHE) pre-cooler, used to downsize the chiller capacity and reduce its working time, are analyzed, designed and discussed. The proposed system utilizes the thermal diode character of heat pipes to transfer waste heat from the source (pre-cooler coolant) to the ambient and has been analyzed for the meteorological conditions of New York. An HPHE pre-cooler with 118 heat pipes, designed for a 30 °C ambient temperature, can effectively dissipate 30 kW or more of datacenter heat throughout the year. The payback period of the HPHE pre-cooler is… (an illustrative payback-period calculation appears after this results list)

  • Open Access

    ARTICLE

    Improved Harris Hawks Optimization Algorithm Based Data Placement Strategy for Integrated Cloud and Edge Computing

    V. Nivethitha*, G. Aghila

    Intelligent Automation & Soft Computing, Vol.37, No.1, pp. 887-904, 2023, DOI:10.32604/iasc.2023.034247

    Abstract Cloud computing is considered to facilitate a more cost-effective way to deploy scientific workflows. The individual tasks of a scientific workflow necessitate a diversified number of large states that are spatially located in different datacenters, thereby resulting in huge delays during data transmission. Edge computing minimizes the delays in data transmission and supports a fixed storage strategy for scientific workflow private datasets. Therefore, this fixed storage strategy creates a significant bottleneck in its storage capacity. At this juncture, integrating the merits of cloud computing and edge computing during the process of rationalizing the data placement of scientific workflows and…

  • Open Access

    ARTICLE

    Enhancing Security by Using GIFT and ECC Encryption Method in Multi-Tenant Datacenters

    Jin Wang1, Ying Liu1, Shuying Rao1, R. Simon Sherratt2, Jinbin Hu1,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3849-3865, 2023, DOI:10.32604/cmc.2023.037150

    Abstract Data security and user privacy have become crucial elements in multi-tenant data centers. The various traffic types in a multi-tenant data center in the cloud environment have their own characteristics and requirements. In the data center network (DCN), short and long flows are sensitive to low latency and high throughput, respectively. Traditional security processing approaches, however, neglect these characteristics and requirements. This paper proposes a fine-grained security enhancement mechanism (SEM) to solve the problem of heterogeneous traffic and reduce the flow completion time (FCT) of short flows while ensuring the security of multi-tenant traffic transmission. Specifically, for short flows in DCN,…

  • Open Access

    ARTICLE

    Congestion Control Using In-Network Telemetry for Lossless Datacenters

    Jin Wang1, Dongzhi Yuan1, Wangqing Luo1, Shuying Rao1, R. Simon Sherratt2, Jinbin Hu1,*

    CMC-Computers, Materials & Continua, Vol.75, No.1, pp. 1195-1212, 2023, DOI:10.32604/cmc.2023.035932

    Abstract In Ethernet lossless Data Center Networks (DCNs) deployed with Priority-based Flow Control (PFC), the head-of-line blocking problem is still difficult to prevent, because PFC is triggered under burst traffic scenarios even with existing congestion control solutions. To address the head-of-line blocking problem of PFC, we propose a new congestion control mechanism. The key point of Congestion Control Using In-Network Telemetry for Lossless Datacenters (ICC) is to use In-Network Telemetry (INT) technology to obtain comprehensive congestion information, which is then fed back to the sender to adjust the sending rate in a timely and accurate manner. It is possible to control congestion… (a generic telemetry-driven rate-control sketch appears after this results list)

  • Open Access

    ARTICLE

    A Genetic Based Leader Election Algorithm for IoT Cloud Data Processing

    Samira Kanwal1, Zeshan Iqbal1, Aun Irtaza1, Rashid Ali2, Kamran Siddique3,*

    CMC-Computers, Materials & Continua, Vol.68, No.2, pp. 2469-2486, 2021, DOI:10.32604/cmc.2021.014709

    Abstract In IoT networks, nodes communicate with each other for computational services, data processing, and resource sharing. Much of the time, huge volumes of data are generated at the network edge due to extensive communication between IoT devices, so this tide of data is transferred to the cloud data center (CDC) for efficient processing and effective data storage. In the CDC, leader nodes are responsible for higher performance, reliability, deadlock handling, and reduced latency, and for providing cost-effective computational services to users. However, optimal leader selection is a computationally hard problem, as several factors such as memory, CPU MIPS, and bandwidth are needed to…

  • Open Access

    ARTICLE

    A Trusted NUMFabric Algorithm for Congestion Price Calculation at the Internet-of-Things Datacenter

    Shan Chun1, Xiaolong Chen2, Guoqiang Deng3,*, Hao Liu4

    CMES-Computer Modeling in Engineering & Sciences, Vol.126, No.3, pp. 1203-1216, 2021, DOI:10.32604/cmes.2021.012230

    Abstract The important issues in TCP congestion control are how to compute the link price according to the link status and how to regulate the data sending rate based on congestion-pricing feedback. However, it is difficult to predict the congestion state of the link-end accurately at the source. In this paper, we present an improved NUMFabric algorithm for calculating the overall congestion price. In the proposed scheme, the whole network structure is obtained by the central control server in the Software-Defined Network, and a dual-hierarchy algorithm for calculating the overall network congestion price is demonstrated.…

  • Open Access

    ARTICLE

    Application Centric Virtual Machine Placements to Minimize Bandwidth Utilization in Datacenters

    Muhammad Abdullah1,*, Saad Ahmad Khan1, Mamdouh Alenez2, Khaled Almustafa3, Waheed Iqbal1

    Intelligent Automation & Soft Computing, Vol.26, No.1, pp. 13-25, 2020, DOI:10.31209/2018.100000047

    Abstract An efficient placement of virtual machines (VMs) in a cloud datacenter is important to maximize the utilization of infrastructure. Most of the existing work maximises the number of VMs placed on a minimum number of physical machines (PMs) to reduce energy consumption. Recently, big data applications, which are mostly hosted on cloud datacenters, have become popular. Big data applications are deployed on multiple VMs and are considered data- and communication-intensive applications. These applications can consume most of the datacenter bandwidth if the VMs are not placed close to each other. In this paper, we investigate the use of different VM placement…

  • Open Access

    ARTICLE

    Task-Based Resource Allocation Bid in Edge Computing Micro Datacenter

    Yeting Guo1, Fang Liu2,*, Nong Xiao1, Zhengguo Chen1,3

    CMC-Computers, Materials & Continua, Vol.61, No.2, pp. 777-792, 2019, DOI:10.32604/cmc.2019.06366

    Abstract Edge computing attracts online service providers (SP) to offload services to edge computing micro datacenters that are close to end users. Such offloads reduce packet-loss rates, delays and delay jitter when responding to service requests. Simultaneously, edge computing resource providers (RP) are concerned with maximizing incomes by allocating limited resources to SPs. Most works on this topic make a simplified assumption that each SP has a fixed demand; however, in reality, SPs themselves may have multiple task-offloading alternatives. Thus, their demands could be flexibly changed, which could support finer-grained allocations and further improve the incomes for RPs. Here, we propose…

  • Open Access

    ARTICLE

    A Heterogeneous Virtual Machines Resource Allocation Scheme in Slices Architecture of 5G Edge Datacenter

    Changming Zhao1,2,*, Tiejun Wang2, Alan Yang3

    CMC-Computers, Materials & Continua, Vol.61, No.1, pp. 423-437, 2019, DOI:10.32604/cmc.2019.07501

    Abstract In this paper, we investigate a heterogeneous resource allocation scheme for virtual machines with slicing technology in the 5G/B5G edge computing environment. In general, different slices for different task scenarios exist in the same edge layer synchronously. Many studies reveal that the virtual machines of different slices exhibit strong heterogeneity, with different reserved resource granularity. Under these conditions, the allocation process is an NP-hard problem, and it is difficult to meet the actual demand of the tasks in the strongly heterogeneous environment. Based on the slicing and container concepts, we propose a resource allocation scheme named Two-Dimension allocation and…

Displaying results 1-10 of 10 on page 1.
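
Note on the HPHE pre-cooler entry above: a payback period for a retrofit of this kind is conventionally estimated as the installed cost divided by the annual energy-cost savings from reduced chiller operation. The sketch below illustrates only that arithmetic; the capital cost, saved chiller power, electricity tariff, and operating hours are hypothetical placeholders and are not figures from the paper.

```python
# Illustrative payback-period estimate for a heat-pipe heat-exchanger (HPHE)
# pre-cooler retrofit. All input values are hypothetical placeholders and are
# NOT taken from the Singh et al. paper listed above.

def payback_period_years(capital_cost_usd: float,
                         chiller_power_saved_kw: float,
                         operating_hours_per_year: float,
                         electricity_price_usd_per_kwh: float) -> float:
    """Simple (undiscounted) payback: installed cost / annual energy-cost savings."""
    annual_savings_usd = (chiller_power_saved_kw
                          * operating_hours_per_year
                          * electricity_price_usd_per_kwh)
    return capital_cost_usd / annual_savings_usd

if __name__ == "__main__":
    # Hypothetical example: a pre-cooler that offloads 30 kW of chiller duty
    # for 4,000 h/year at $0.12/kWh, installed for $25,000.
    years = payback_period_years(25_000, 30.0, 4_000, 0.12)
    print(f"Estimated payback: {years:.1f} years")
```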
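
Note on the ICC entry above: the abstract describes a general pattern in which In-Network Telemetry (INT) reports are fed back to the sender to drive its rate adjustment. The sketch below is a generic queue-depth-driven rate controller written only to illustrate that feedback loop; it is not the paper's ICC algorithm, and the target queue depth, backoff rule, and probing step are assumptions chosen for the example.

```python
# Generic illustration of sender-side rate control driven by in-network
# telemetry (INT) feedback. This is NOT the ICC algorithm from the paper
# above; thresholds and gains are arbitrary values chosen for the example.

from dataclasses import dataclass

@dataclass
class IntReport:
    queue_depth_bytes: int      # egress queue occupancy reported by a switch
    link_capacity_gbps: float   # capacity of the congested link

class TelemetryRateController:
    def __init__(self, line_rate_gbps: float,
                 target_queue_bytes: int = 100 * 1024,
                 additive_step_gbps: float = 0.5):
        self.rate_gbps = line_rate_gbps          # current sending rate
        self.line_rate_gbps = line_rate_gbps
        self.target_queue_bytes = target_queue_bytes
        self.additive_step_gbps = additive_step_gbps

    def on_report(self, report: IntReport) -> float:
        """Update the sending rate from one INT report and return the new rate."""
        if report.queue_depth_bytes > self.target_queue_bytes:
            # Congestion building up: back off in proportion to the queue overshoot.
            overshoot = report.queue_depth_bytes / self.target_queue_bytes
            self.rate_gbps = max(self.rate_gbps / overshoot, 0.1)
        else:
            # Queue is short: probe for more bandwidth additively.
            self.rate_gbps = min(self.rate_gbps + self.additive_step_gbps,
                                 self.line_rate_gbps)
        return self.rate_gbps

# Example: a 100 Gbps sender reacting to two telemetry reports.
ctrl = TelemetryRateController(line_rate_gbps=100.0)
ctrl.on_report(IntReport(queue_depth_bytes=400 * 1024, link_capacity_gbps=100.0))  # backs off
ctrl.on_report(IntReport(queue_depth_bytes=10 * 1024, link_capacity_gbps=100.0))   # probes up
```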