Open Access

ARTICLE


Encoder-Decoder Based LSTM Model to Advance User QoE in 360-Degree Video

by Muhammad Usman Younus1,*, Rabia Shafi2, Ammar Rafiq3, Muhammad Rizwan Anjum4, Sharjeel Afridi5, Abdul Aleem Jamali6, Zulfiqar Ali Arain7

1 Ecole Doctorale Mathematiques, Informatique, Telecommunication, de Toulouse, University Paul Sabatier, Toulouse, 31330, France
2 School of Electronics and Information, Northwestern Polytechnical University, Xi'an, 710129, China
3 Department of Computer Science, NFC Institute of Engineering and Fertilizer Research, Faisalabad, 38000, Pakistan
4 Department of Electronic Engineering, The Islamia University of Bahawalpur, Bahawalpur, 63100, Pakistan
5 Department of Electrical Engineering, Sukkur IBA University, Sukkur, 65200, Pakistan
6 Department of Electronic Engineering, Quaid-e-Awam University of Engineering, Science and Technology (QUEST), Nawabshah, 67450, Pakistan
7 Department of Telecommunication Engineering, MUET, Jamshoro, 76060, Pakistan

* Corresponding Author: Muhammad Usman Younus.

(This article belongs to the Special Issue: Application of Machine-Learning in Computer Vision)

Computers, Materials & Continua 2022, 71(2), 2617-2631. https://doi.org/10.32604/cmc.2022.022236

Abstract

The growth of multimedia content has caused a massive increase in network traffic for video streaming, demanding solutions that preserve the user's Quality-of-Experience (QoE). 360-degree videos have taken viewers by storm, yet a user watches only a portion of the 360-degree frame at any time, known as the viewport. Despite the immense hype, 360-degree streaming suffers from a serious drawback: the user's viewport must be pre-fetched in advance, and inaccurate viewport prediction makes viewing uncomfortable. Ideally, bandwidth consumption can be minimized if the user's head motion is known in advance. Addressing this problem, we propose an Encoder-Decoder based Long Short-Term Memory (LSTM) model that more accurately captures the non-linear relationship between past and future viewport positions. The model operates on transformed data rather than the raw input to predict future user movement. This prediction model is then combined with a rate adaptation approach that assigns bitrates to the tiles of 360-degree video frames under a given network capacity. Hence, the proposed work aims to improve system performance by jointly optimizing the QoE parameters. Experiments were carried out and compared with existing work to evaluate the proposed model, and the results show that it provides higher user QoE than its competitors.
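To make the pipeline concrete, the sketch below shows one plausible reading of the abstract: an encoder-decoder LSTM that maps a window of past viewport angles to a window of future angles, followed by a simple greedy bitrate allocation over tiles under a bandwidth budget. The window lengths, layer sizes, bitrate ladder, tile grid, and function names are illustrative assumptions, not the authors' exact architecture or rate-adaptation rule.

```python
# Minimal sketch, assuming (yaw, pitch) viewport traces and a tiled 360-degree frame.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

PAST, FUTURE, FEATURES = 30, 30, 2   # assumed window sizes: 30 past / 30 future (yaw, pitch) samples

def build_viewport_predictor(units: int = 64) -> tf.keras.Model:
    """Encoder-decoder LSTM: past trajectory -> predicted future trajectory (assumed layout)."""
    inputs = layers.Input(shape=(PAST, FEATURES))
    # Encoder: compress the past viewport trajectory into a fixed-size state.
    _, state_h, state_c = layers.LSTM(units, return_state=True)(inputs)
    # Decoder: unroll the encoder state over the prediction horizon.
    x = layers.RepeatVector(FUTURE)(state_h)
    x = layers.LSTM(units, return_sequences=True)(x, initial_state=[state_h, state_c])
    outputs = layers.TimeDistributed(layers.Dense(FEATURES))(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def allocate_bitrates(tile_probs, ladder=(0.5, 1.0, 2.0, 4.0), budget_mbps=20.0):
    """Hypothetical greedy rate adaptation: give every tile the base rate, then
    upgrade the tiles most likely to fall inside the predicted viewport."""
    alloc = {tile: ladder[0] for tile in tile_probs}
    budget = budget_mbps - ladder[0] * len(tile_probs)
    for tile, _ in sorted(tile_probs.items(), key=lambda kv: -kv[1]):
        upgrade = ladder[-1] - ladder[0]
        if budget >= upgrade:
            alloc[tile] = ladder[-1]
            budget -= upgrade
    return alloc

if __name__ == "__main__":
    model = build_viewport_predictor()
    past = np.random.rand(1, PAST, FEATURES).astype("float32")   # dummy normalized trajectory
    future = model.predict(past, verbose=0)                      # shape: (1, FUTURE, FEATURES)
    print(future.shape)
    # Assumed 4x4 tile grid with random viewport probabilities, for illustration only.
    print(allocate_bitrates({(r, c): float(np.random.rand()) for r in range(4) for c in range(4)}))
```

In this reading, the predictor supplies per-tile viewport probabilities (here faked with random numbers), and the allocator spends the remaining budget on the highest-probability tiles first; the paper's actual optimization of QoE parameters may differ.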

Keywords


Cite This Article

APA Style
Younus, M.U., Shafi, R., Rafiq, A., Anjum, M.R., Afridi, S. et al. (2022). Encoder-decoder based LSTM model to advance user qoe in 360-degree video. Computers, Materials & Continua, 71(2), 2617-2631. https://doi.org/10.32604/cmc.2022.022236
Vancouver Style
Younus MU, Shafi R, Rafiq A, Anjum MR, Afridi S, Jamali AA, et al. Encoder-decoder based LSTM model to advance user qoe in 360-degree video. Comput Mater Contin. 2022;71(2):2617-2631. https://doi.org/10.32604/cmc.2022.022236
IEEE Style
M. U. Younus et al., “Encoder-Decoder Based LSTM Model to Advance User QoE in 360-Degree Video,” Comput. Mater. Contin., vol. 71, no. 2, pp. 2617-2631, 2022. https://doi.org/10.32604/cmc.2022.022236



Copyright © 2022 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.