Computers, Materials & Continua DOI: 10.32604/cmc.2022.019389
Article
Hyper-Convergence Storage Framework for EcoCloud Correlates
1Department of Computer Science, Virtual University of Pakistan, Lahore, 54000, Pakistan
2Department of Computer Science, Lahore Garrison University, Lahore, 54000, Pakistan
3Department of Statistics and Computer Science, University of Veterinary and Animal Sciences, Lahore, 54000, Pakistan
4Department of Industrial Engineering, Faculty of Engineering, Rabigh, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
5Department of Information Systems, Faculty of Computing and Information Technology-Rabigh, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
*Corresponding Author: Muhammad Hamid. Email: muhammad.hamid@uvas.edu.pk
Received: 12 April 2021; Accepted: 05 June 2021
Abstract: Cloud computing is an emerging domain that is capturing global users from all walks of life: the corporate sector, the government sector, and the social arena. Various cloud providers offer multiple services and facilities to this audience, and the number of providers is increasing swiftly. This enormous pace generates the requirement for a comprehensive ecosystem that provides a seamless, customized user environment, not only to enhance the user experience but also to improve security, availability, accessibility, and latency. Emerging technology provides robust solutions to many of our problems, and the cloud platform is one of them; it is worth mentioning that these solutions also amplify complexity and the need to sustain such rapid solutions. In cloud computing, new entrants appear daily as cloud service providers, resellers, tech support, hardware manufacturers, and software developers, and these actors play their role in the growth and sustenance of the cloud ecosystem. Our objective is to use convergence for cloud services, software-defined networks and network function virtualization for infrastructure, and cognition for pattern development and a knowledge repository. To gear up these processes, machine learning is applied to induce intelligence, maintain ecosystem growth, monitor performance, and make decisions for the sustenance of the ecosystem. Workloads may be programmed to "superficially" imitate most business applications and generated in large numbers using lightweight workload generators that merely stress the storage. In today's IT environment, where many enterprises use the cloud to service some of their application demands, a different performance-testing technique that assesses more than the storage is necessary. With hyper-converged infrastructure (HCI), compute and storage are merged into a single building block, resulting in a large pool of compute and storage resources when clustered with other building blocks. The novelty of this work is the design and testing of cloud storage using measurements of availability, downtime, and outage parameters. Results show that storage reliability in a hyper-converged system is above 92%.
Keywords: Virtual cloud; software-defined network; network function virtualization; hyper-convergence; virtualization
Cloud computing has become a popular technology platform that delivers various services over the internet along with an effective "pay as you use" low-cost model. Due to its rapid growth, large industries and organizations have switched their data to the cloud. Cloud computing has mitigated myriad issues by providing services with the least cost, time, and effort. As per its basic model, the cloud offers three types of services for respective users. Though hundreds of services have been developed and offered by cloud service providers, almost all of them are linked to these three key service categories. Cloud computing provides access to cloud services, modules, and infrastructure through the internet due to its global accessibility [1].
Hyperconvergence is described as the replacement of proprietary hardware-defined storage and physically converged infrastructure with a software-defined storage infrastructure that is virtually converged inside the hypervisor tier. Hyperconvergence is a software-defined virtual architecture that combines hardware-defined features for computation, storage, networking, and administration.
Cloud computing is meant for the delivery of services over a network. The cloud provides users with computing resources for storing data, whether in a virtual machine, an application, or a software tool. It is considered an ideal option when users need to store data not for temporary use but permanently. Technologies have changed the face of the internet with computing power; the computing era has shifted from parallel computing to distributed computing, fog computing, and cloud computing. With the increase of internet traffic, people store and access data from different servers. Cloud computing appears as a technology that allows the remote storage and access of data. Cloud technology advances computation in terms of efficiency and delivers software as a service [2].
The use of computational resources and the provision of cloud services ease data access for cloud users. Cloud computing can be defined as an environment where the computing resources required by one party can be outsourced from another party and accessed over the internet when required. The technology uses a distributed architecture that centralizes server resources on an accessible platform so that services can be provided to the user on demand. Cloud computing offers various benefits to users in terms of low pay-per-use cost for cloud resources such as storage, compute, and network in a hyper-converged setting, as shown in Fig. 1. Storage can be increased or decreased according to user requirements and adjusted with flexibility. Operational cost is also low, as users pay only for the services they use. This is termed pay-per-use or subscription cost, which is relatively low compared to maintaining the actual resources [3].
Multiple nodes should be clustered together to generate pools of shared computing and storage resources for hyper-converged architectures to work effectively. Background services are always running to manage anything from fundamental internode communication to cluster-wide data transfer for data protection and robustness, as well as deduplication and compression for effective capacity usage. These services need cluster resources and therefore add to the list of non-application-specific factors that might affect app performance. To avoid application impact, well-engineered systems will have built-in techniques to limit resource consumption by underlying services.
Keeping the focus on reducing service response time and on the performance of load-balancing techniques for efficient traffic management makes cloud computing a major challenge for big data applications. The solution to these challenges is a simple system that requires less hardware, has a single point of management, and contains resources that are validated, tested, and installed before deployment. A flexible system is one where nodes can be added or removed easily, that can be relocated easily, and that is more agile [4]. Such a system is cost-effective, requires less hardware, is easy to maintain, and is cheaper to deploy. The newly proposed model of hyper-converged infrastructure meets all these requirements. Today, the world is embracing digitization, so IT infrastructure needs to transform too. This transformation [5] will result in the adoption of new IT techniques and a focus on business agility. As a result, innovations will be more efficient, agile, and responsive. Hyper-Convergence Infrastructure (HCI) supports the business in this agility [6].
Hyper-convergence is defined as a software-defined infrastructure that integrates all the compute, network, and storage resources into a single unit, supported by a single vendor, that is deployed on and runs on commodity x86 servers [7].
"Hyperconverged infrastructure combines fundamental storage, compute, and networking operations into a single software solution or service". It is a more tightly integrated converged system, with computing, storage, and networks decoupled from the underlying infrastructure and configured at the software level.
A hypervisor and a unit that manages the hypervisor are also installed on the same machine. The hypervisor is the main administration point in HCI systems. A hypervisor is software that creates and runs virtual machines; its purpose in the HCI stack is to oversee server virtualization. The infrastructure produces more cloud-like services. In the data center environment, reliability and performance of the system are the key factors. HCI appliances provide improved reliability and performance, as these appliances pass through various testing and validation processes. HCI systems are easy to deploy, and they come with all the packages required for upgrading or scaling the system [8].
The infrastructure also makes installing, purchasing, and managing hardware and software much easier, as the customer does not have to spend time selecting the individual hardware and software components that meet their workload requirements, a process that not only consumes a lot of time but also incurs significant cost. The HCI system is designed to meet a specified workload, which makes purchasing the appliance easy [9].
The IT industry does not need to buy the components separately. HCI appliances can scale, and once all the components are integrated, managing the single unit becomes easy. For maintenance and upgrade purposes, nodes can be swapped easily. Various integrated software interfaces are used to manage the operations of the infrastructure, and all these operations are virtualized as shown in Fig. 2. HCI can be deployed in an organization in two common ways. One way is to build the infrastructure: the organization buys the servers and the HCI software, and these components are then merged to create an HCI solution. The second way is to purchase an HCI solution that is configured and tested before purchase [10].
The ecosystem analogy rests on an extensive theoretical foundation concerning firm relationships and collaboration, inter-organizational networks, and complexity theory. Evolution is a fundamental characteristic of ecosystems, which can adjust to changes both within the ecosystem and in its surrounding environment [11].
Microprocessor and storage technologies, computer architecture and software systems, parallel algorithms, and distributed control mechanisms have all evolved over decades, paving the path for cloud computing. Cloud computing was made possible by the interconnection provided by an ever-evolving Internet, using a hyper-convergent "magic box" of storage, compute, and network. The servers in a cloud architecture communicate over high-bandwidth, low-latency networks, which are structured around a high-performance interconnect.
The structure network results from decomposing ecosystem structures in a comprehensible manner by capturing the relationships, relationship types, organizational characteristics, and their connections. In a dynamic structure, both internal and external factors can originate or trigger connections between ecosystem members. In the dynamic structure of the core ecosystem, multi-sided platforms connect different members and add value for the platform players [12].
Hyper-convergence is an efficient, recent technology that combines the different cloud service models through advanced hardware solutions. With an HCI solution, management and infrastructure deployment become easier and more flexible. Many benefits are associated with HCI, such as cost reduction, higher scalability, data protection, and efficient management of IT resources in terms of the virtual network, storage, and software-based architecture. The latest data center technology is equipped with HCI, which automates data center operations such as virtual machine deployment, monitoring, and pre-defined security policies. In a virtual environment, the resources are grouped into a magic box that provides efficient resource pooling, high performance, and better resilience [13].
These solutions can provide a blueprint for achieving a secure hyper-converged data center. This research aims to formalize an autonomous management model to evaluate a hyper-convergent virtual cloud ecosystem using cognitive management in hyper-convergence. It is important to note that emerging technologies such as HCI bring more robust solutions to complex problems but, at the same time, reach a level of complexity that cannot be handled by humans alone. Therefore, artificial intelligence, machine learning, deep learning, and neural networks are becoming vital to manage this new level of complexity.
In artificial intelligence, we increasingly draw on cognitive science to make AI solutions more human-like. Our proposed model is meant to deal with a cloud ecosystem that combines multiple Cloud Service Providers (CSPs), services, security, infrastructure, and users. The complexity level is therefore very high, and our focus is to deal with this complexity in a human-like manner through the incorporation of cognitive correlates, along with machine learning, into our proposed Intelligent Cloud Ecosystem (ICE).
The proposed model can evaluate the hyper-convergent virtual cloud ecosystem provided by different cloud service providers. Cloud management uses the virtualized environment of versatile service providers. The focus of our proposed ICE model is on the following factors:
• Optimize the existing cloud eco-network to provide centralized management of cloud systems and service clustering through convergence services.
• Evolve the current cloud system in the heterogeneous environment using machine learning algorithms and map the cloud services to form a complete configuration cluster for the hyper-convergent virtual cloud ecosystem.
• Create and adapt new cloud service structures and virtual memory replicas for virtual clouds.
The responsibilities of the cloud ecosystem controller are to manage different types of cloud services such as storage, compute, and network. All the cloud stakeholders, such as providers, resellers, and adopters, have a cloud ecosystem network with many cloud services under different naming conventions, leading to heterogeneity of cloud services. The role of the cloud ecosystem controller is therefore very important for managing cloud service clusters and heterogeneity. The cloud ecosystem controller is further divided into three sub-modules: service cluster, ecosystem network, and heterogeneous ecosystem. All clouds are physically heterogeneous and run in their own data centers. Two or more cloud service providers can virtually connect on a common platform where all services, structures, and networks combine in the form of a virtual cloud, as shown in Fig. 3.
Service mapping contains all the information about the basic cloud service model, the service name, and its description as offered by the different service providers. There are different ways of mapping cloud services to cloud service providers. The proposed infrastructure, with its ability to make decisions based upon knowledge obtained from the past, will enhance the reliability and performance of the data center. The performance management module is linked directly to the learning module and the QoS services module. HCI is a complicated infrastructure based on several components that work together to achieve the goal of reliability, as shown in Tab. 1.
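As an illustration of the service-mapping idea, the sketch below shows one minimal way such a mapping could be held in memory. The class and field names (CloudService, ServiceMap, register) and the example entries are hypothetical and are not prescribed by the paper.

```python
# Minimal, hypothetical service-mapping structure for the cloud ecosystem
# controller; the paper does not define a concrete schema.
from dataclasses import dataclass, field

@dataclass
class CloudService:
    name: str          # provider-specific service name
    model: str         # basic service model, e.g. "IaaS", "PaaS", "SaaS"
    provider: str      # cloud service provider offering the service
    description: str = ""

@dataclass
class ServiceMap:
    # Maps a canonical service category to the heterogeneous provider offerings.
    clusters: dict = field(default_factory=dict)

    def register(self, category: str, service: CloudService) -> None:
        self.clusters.setdefault(category, []).append(service)

# Example: two providers exposing block storage under different names.
smap = ServiceMap()
smap.register("block-storage", CloudService("FastBlock", "IaaS", "ProviderA"))
smap.register("block-storage", CloudService("DuraDisk", "IaaS", "ProviderB"))
```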
The parameters taken to measure the reliability of the storage are availability, downtime, and outage. In the proposed infrastructure, it was first checked whether the storage services were available; after that, the downtime and outage time of the storage were measured. For this purpose, services from different service providers were taken. The list of services is taken from the CloudHarmony website (https://www.cloudharmony.com). In this section, the simulation results for the reliability of the storage are discussed [14]. The results are obtained using the Mamdani fuzzy inference system shown in Fig. 4.
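The text does not spell out how the three crisp inputs are derived from the monitored services, so the short sketch below shows one plausible way, assuming periodic reachability probes of a storage endpoint. The probe format, the outage threshold, and the function name summarize are all hypothetical.

```python
# Hypothetical derivation of the three crisp inputs (availability %, downtime,
# outage) from timestamped reachability probes. An "outage" is counted here as
# any unreachable stretch longer than a chosen threshold -- an assumption made
# for illustration, not a definition taken from the paper.
def summarize(probes, outage_threshold_h=0.5):
    total_h = probes[-1][0] - probes[0][0]
    down_h, outage_h, run_h = 0.0, 0.0, 0.0
    for (t0, reachable), (t1, _) in zip(probes, probes[1:]):
        dt = t1 - t0
        if not reachable:
            down_h += dt
            run_h += dt
        else:
            if run_h > outage_threshold_h:
                outage_h += run_h
            run_h = 0.0
    if run_h > outage_threshold_h:
        outage_h += run_h
    availability_pct = 100.0 * (total_h - down_h) / total_h
    return availability_pct, down_h, outage_h

# Example: 30 days of hourly probes with one 4-hour unreachable window.
probes = [(h, not (100 <= h < 104)) for h in range(24 * 30)]
print(summarize(probes))  # roughly (99.4, 4.0, 4.0)
```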
Membership functions are the mathematical functions that map the crisp values of the input and output variables to membership degrees. These functions are available in the MATLAB Fuzzy Logic Toolbox [15]. The membership functions used by the proposed system are shown in Tab. 2.
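The paper builds the system with MATLAB's Fuzzy Logic Toolbox; for illustration, the following is a minimal equivalent sketch in Python/NumPy. The universes of discourse and the triangular breakpoints are placeholders chosen for demonstration only, since the exact values of Tab. 2 are not reproduced in the text.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function over grid x with feet a, c and peak b.
    Degenerate feet (a == b or b == c) give left/right shoulder shapes."""
    y = np.zeros_like(x, dtype=float)
    if a != b:
        rise = (x > a) & (x < b)
        y[rise] = (x[rise] - a) / (b - a)
    if b != c:
        fall = (x > b) & (x < c)
        y[fall] = (c - x[fall]) / (c - b)
    y[x == b] = 1.0
    return y

# Assumed universes: availability and reliability in percent,
# downtime and outage in hours per month (placeholder ranges).
availability = np.linspace(0, 100, 1001)
downtime     = np.linspace(0, 10, 1001)
outage       = np.linspace(0, 10, 1001)
reliability  = np.linspace(0, 100, 1001)

mf = {
    "avail_not":  trimf(availability,  0,   0,  60),
    "avail_ok":   trimf(availability, 50,  75,  95),
    "avail_high": trimf(availability, 90, 100, 100),
    "down_low":   trimf(downtime, 0, 0, 3),
    "down_med":   trimf(downtime, 2, 5, 8),
    "down_high":  trimf(downtime, 7, 10, 10),
    "out_low":    trimf(outage, 0, 0, 3),
    "out_med":    trimf(outage, 2, 5, 8),
    "out_high":   trimf(outage, 7, 10, 10),
    "rel_not":    trimf(reliability,  0,   0,  50),
    "rel_ok":     trimf(reliability, 40,  65,  90),
    "rel_high":   trimf(reliability, 85, 100, 100),
}
```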
Based upon the input variables, the reliability values were derived from the assigned rules. Some of the rules are described in Tab. 3.
All the rules were generated using the Mamdani fuzzy inference rule editor in MATLAB. The rule diagram is shown in Fig. 5.
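Continuing the NumPy sketch above, the fragment below shows the Mamdani min/max mechanics for three illustrative rules that mirror the cases discussed later; the complete rule base of Tab. 3 is not reproduced in the extracted text, so these rules are only an approximation of it.

```python
def degree(value, universe, curve):
    """Membership degree of a crisp value, read off the sampled curve."""
    return float(np.interp(value, universe, curve))

def aggregate(avail, down, out):
    """Fire three illustrative Mamdani rules (AND = min) and return the
    aggregated (max-combined, clipped) output fuzzy set over `reliability`."""
    # Rule 1: highly available AND low downtime AND low outage -> highly reliable
    w1 = min(degree(avail, availability, mf["avail_high"]),
             degree(down,  downtime,     mf["down_low"]),
             degree(out,   outage,       mf["out_low"]))
    # Rule 2: available AND medium downtime AND medium outage -> reliable
    w2 = min(degree(avail, availability, mf["avail_ok"]),
             degree(down,  downtime,     mf["down_med"]),
             degree(out,   outage,       mf["out_med"]))
    # Rule 3: not available -> not reliable
    w3 = degree(avail, availability, mf["avail_not"])
    # Clip each consequent by its firing strength, then take the pointwise max.
    return np.maximum.reduce([np.minimum(w1, mf["rel_high"]),
                              np.minimum(w2, mf["rel_ok"]),
                              np.minimum(w3, mf["rel_not"])])
```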
The de-fuzzifier is one of the basic components of any decision-based autonomous system. There are various kinds of de-fuzzifiers; in this study, a centroid de-fuzzifier is used [16].
The storage de-fuzzification equation is shown in Eq. (1).
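On a sampled output grid, a centroid de-fuzzifier is commonly computed as z* = sum(z * mu(z)) / sum(mu(z)). The helper below, continuing the sketch, implements this discrete form; it is an illustration of the standard centroid method, not necessarily the exact notation of Eq. (1).

```python
def centroid(universe, aggregated):
    """Discrete centroid de-fuzzification on the output grid:
    z* = sum(z * mu(z)) / sum(mu(z))."""
    total = aggregated.sum()
    if total == 0.0:
        return float("nan")   # no rule fired for this input
    return float((universe * aggregated).sum() / total)
```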
Fig. 6 demonstrates the de-fuzzifier graphical representation of storage reliability for outage and availability. Fig. 7 demonstrates the de-fuzzifier graphical representation of storage reliability for downtime and availability.
The rules show that, based upon certain values, the reliability of the system is measured to be reliable, highly reliable, or not-reliable [17]. The system will be highly reliable only when the storage is highly available, the outage is low, and the downtime is low. The lookup diagrams for case 1 are shown in Fig. 8.
The system will be reliable if the storage is available, the downtime is medium, and the outage of the services is medium, as shown in Fig. 9.
The system will be not-reliable when the storage is not available, even if the downtime is low and the outage value is also low, as shown in Fig. 10.
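Putting the sketch together, the snippet below exercises the three cases with made-up operating points; the numbers are placeholders for illustration, not measurements from the CloudHarmony data.

```python
# Exercise the sketch on three illustrative operating points.
cases = {
    "highly reliable": (99.5, 0.2, 0.1),  # highly available, low downtime/outage
    "reliable":        (80.0, 5.0, 5.0),  # available, medium downtime/outage
    "not reliable":    (30.0, 1.0, 1.0),  # not available
}
for label, (avail, down, out) in cases.items():
    score = centroid(reliability, aggregate(avail, down, out))
    print(f"{label:>15}: crisp reliability score = {score:.1f}")
```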
An intelligent cloud ecosystem has been proposed, built, and tested with a focus on the emerging demands of the Internet of Things, smart corporates, and smart cities. All such applications of the cloud and allied domains engage not only sophisticated networks but also data management as an integral part. Therefore, the objective of the proposed model is to provide a platform that can address these demands using artificial intelligence and virtualization techniques. In our validation, all the components have provided tangible and favorable results that ensure the workability of the model. The intelligent cloud ecosystem is one contribution among many research frontiers on the horizon of cloud computing and data science. Many areas and parameters will emerge in this swiftly changing environment, and their demands become more versatile with every passing day. Our model is flexible enough to incorporate new allied parameters and to learn new structures and services that are becoming essential for a sustainable system. The results showed storage reliability above 92%.
Acknowledgement: Thanks to our families and colleagues, who provided moral support.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding this study.
1. H. Yang, Q. Zhao, Z. Luan and D. Qian, “IMeter: An integrated VM power model based on performance profiling,” Future Generation Computer Systems, vol. 36, pp. 267–286, 2014. [Google Scholar]
2. G. Castañé, H. Xiong, D. Dong and J. Morrison, “An ontology for heterogeneous resources management interoperability and HPC in the cloud,” Future Generation Computer Systems, vol. 88, no. 7, pp. 373–384, 2018. [Google Scholar]
3. B. Mao, Y. Yang, S. Wu, H. Jiang and K. Li, “IOFollow: Improving the performance of VM live storage migration with IO following in the cloud,” Future Generation Computer Systems, vol. 91, no. 3, pp. 167–176, 2019. [Google Scholar]
4. V. Simic, B. Stojanovic and M. Ivanovic, “Optimizing the performance of optimization in the cloud environment: An intelligent auto-scaling approach,” Future Generation Computer Systems, vol. 101, no. 1, pp. 909–920, 2019. [Google Scholar]
5. B. Bibal and D. Dharma, “HAS: Hybrid auto-scaler for resource scaling in cloud environment,” Journal of Parallel and Distributed Computing, vol. 120, no. 12, pp. 1–15, 2018. [Google Scholar]
6. R. Li, Q. Zheng, X. Li and Z. Yan, “Multi-objective optimization for rebalancing virtual machine placement,” Future Generation Computer Systems, vol. 105, no. 6, pp. 824–842, 2020. [Google Scholar]
7. Y. Cheng, W. Chen, Z. Wang, Z. Tang and Y. Xiang, “Smart VM co-scheduling with the precise prediction of performance characteristics,” Future Generation Computer Systems, vol. 105, no. 99, pp. 1016–1027, 2020. [Google Scholar]
8. D. Saxena and A. K. Singh, “A proactive autoscaling and energy-efficient VM allocation framework using online multi-resource neural network for cloud data center,” Neurocomputing, vol. 426, no. 3, pp. 248–264, 2021. [Google Scholar]
9. S. Zahra, M. Khan, M. Ali and S. Abbas, “Standardization of cloud security using mamdani fuzzifier,” International Journal of Advanced Computer Science and Applications, vol. 9, no. 3, pp. 292–297, 2018. [Google Scholar]
10. K. Ye, H. Shen, Y. Wang and C. Xu, “Multi-tier workload consolidations in the cloud: Profiling, modeling and optimization,” IEEE Transactions on Cloud Computing, vol. 71, no. 3, pp. 1–9, 2020. [Google Scholar]
11. M. Shifrin, R. Mitrany, E. Biton and O. Gurewitz, “VM scaling and load balancing via cost optimal MDP solution,” IEEE Transactions on Cloud Computing, vol. 71, no. 3, pp. 41–44, 2020. [Google Scholar]
12. M. Ciavotta, G. Gibilisco, D. Ardagna, E. Nitto, M. Lattuada et al., “Architectural design of cloud applications: A performance-aware cost minimization approach,” IEEE Transactions on Cloud Computing, vol. 71, no. 3, pp. 110–116, 2020. [Google Scholar]
13. P. Kryszkiewicz, A. Kliks and H. Bogucka, “Small-scale spectrum aggregation and sharing,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 10, pp. 2630–2641, 2016. [Google Scholar]
14. G. Levitin, L. Xing and Y. Xiang, “Reliability vs. vulnerability of N-version programming cloud service component with dynamic decision time under co-resident attacks,” IEEE Transactions on Services Computing, vol. 1374, no. 3, pp. 1–10, 2020. [Google Scholar]
15. M. Aslanpour, M. Ghobaei and A. Nadjaran, “Auto-scaling web applications in clouds: A cost-aware approach,” Journal of Network and Computer Applications, vol. 95, pp. 26–41, 2017. [Google Scholar]
16. V. Roussev, I. Ahmed, A. Barreto, S. McCulley and V. Shanmughan, “Cloud forensics-tool development studies & future outlook,” Digital Investigation, vol. 18, no. 2, pp. 79–95, 2016. [Google Scholar]
17. X. Zhang, H. Chen, Y. Zhao and Z. Ma, “Improving cloud gaming experience through mobile edge computing,” IEEE Wireless Communications, vol. 26, no. 1, pp. 178–183, 2019. [Google Scholar]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.