Open Access
ARTICLE
A DRL-Based Container Placement Scheme with Auxiliary Tasks
1 Beijing University of Posts and Telecommunications, Beijing, 100876, China.
2 China Electronics Standardization Institute, Beijing, China.
3 Communication Operation Center, State Grid Henan Electric Power Company Information & Telecommunication Company, Zhengzhou, China.
4 Institute of Technology Carlow, Carlow, Ireland.
* Corresponding Author: Chao Jia. Email: .
Computers, Materials & Continua 2020, 64(3), 1657-1671. https://doi.org/10.32604/cmc.2020.09840
Received 21 January 2020; Accepted 02 April 2020; Issue published 30 June 2020
Abstract
Containers are an emerging virtualization technology, widely adopted in the cloud to provide services because of their lightweight, flexible, isolated, and highly portable properties. Cloud services are often instantiated as clusters of interconnected containers. Due to stochastic service arrivals and the complicated cloud environment, it is challenging to achieve an optimal container placement (CP) scheme. We propose to leverage Deep Reinforcement Learning (DRL) to solve the CP problem, as it can learn from experience by interacting with the environment and does not rely on a mathematical model or prior knowledge. However, applying a DRL method directly does not lead to a satisfying result because of the sophisticated environment states and huge action spaces. In this paper, we propose UNREAL-CP, a DRL-based method that places container instances on servers while considering end-to-end delay and resource utilization cost. The proposed method is an actor-critic-based approach, which has advantages in dealing with huge action spaces. Moreover, the idea of auxiliary learning is also included in our architecture: we design two auxiliary learning tasks related to load balancing to improve algorithm performance. Extensive simulation results show that, compared to other DRL methods, UNREAL-CP reduces delay and deployment cost by up to 28.6% while achieving high training efficiency and fast response speed.
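To make the architectural idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of an actor-critic network with a shared encoder and an added auxiliary head, illustrating how a placement policy can be trained jointly with an auxiliary load-balancing prediction task. All names and hyperparameters here (PlacementNet, n_servers, aux_weight, and so on) are hypothetical choices for illustration only.

```python
# Sketch: actor-critic with an auxiliary load-balancing head (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlacementNet(nn.Module):
    """Shared encoder feeding actor, critic, and auxiliary load-prediction heads."""

    def __init__(self, state_dim: int, n_servers: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.actor = nn.Linear(hidden, n_servers)     # placement logits, one per server
        self.critic = nn.Linear(hidden, 1)            # state-value estimate
        self.aux_load = nn.Linear(hidden, n_servers)  # predicted per-server load (auxiliary task)

    def forward(self, state):
        h = self.encoder(state)
        return self.actor(h), self.critic(h), self.aux_load(h)


def loss_fn(logits, value, aux_pred, action, advantage, ret, true_load,
            aux_weight: float = 0.1, entropy_weight: float = 0.01):
    """Standard actor-critic loss plus an auxiliary regression loss on server load."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Policy gradient term weighted by the advantage of the chosen placement.
    policy_loss = -(log_probs.gather(1, action.unsqueeze(1)).squeeze(1) * advantage).mean()
    # Critic regression toward the observed return.
    value_loss = F.mse_loss(value.squeeze(-1), ret)
    # Entropy bonus to keep exploring the large placement action space.
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # Auxiliary load-balancing task: predict the resulting per-server load.
    aux_loss = F.mse_loss(aux_pred, true_load)
    return policy_loss + 0.5 * value_loss + aux_weight * aux_loss - entropy_weight * entropy
```

In this sketch, gradients from the auxiliary load-prediction loss flow back through the shared encoder, which is the general mechanism by which auxiliary tasks can shape the representation used by the placement policy; the paper's specific auxiliary tasks and loss weighting may differ.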
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.