Convolutional Neural Networks Based Video Reconstruction and Computation in Digital Twins
1 Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600 062, India
2 Department of Computer Science and Engineering, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, 500 090, India
3 Department of Instrumentation and Control Engineering, Sri Sairam Engineering College, Chennai, 602 109, India
4 Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, 600 123, India
5 Department of Statistics, Vishwakarma University, Pune, 411 048, India
6 Department of Computer Applications, Madanapalle Institute of Technology & Science, Madanapalle, 517 325, India
7 Department of Electrical Engineering, Model Institute of Engineering and Technology, Jammu, 181 122, India
8 Department of Computer Science and Engineering, R.M.K Engineering College, Kavaraipettai, 601 206, India
* Corresponding Author: S. Neelakandan. Email:
Intelligent Automation & Soft Computing 2022, 34(3), 1571-1586. https://doi.org/10.32604/iasc.2022.026385
Received 24 December 2021; Accepted 27 January 2022; Issue published 25 May 2022
Abstract
With the advancement of communication and computing technologies, multimedia technologies involving video and image applications have become an important part of the information society and are now inextricably linked to people's daily productivity and lives. At the same time, interest in super-resolution (SR) video reconstruction techniques is growing. The design of digital twins for video computation and video reconstruction currently faces a number of difficult issues. Although several SR reconstruction techniques are available in the literature, most of these works have not considered the spatio-temporal relationship between video frames. With this motivation, this paper presents VDCNN-SS, a very deep convolutional neural network (VDCNN) with a spatiotemporal similarity (SS) model for video reconstruction in digital twins. The proposed VDCNN-SS technique maps the relationship between corresponding low-resolution (LR) and high-resolution (HR) image blocks, and it exploits the spatiotemporal non-local complementary and redundant information among neighboring low-resolution video frames. The VDCNN is then used to learn the LR–HR correlation mapping. A series of simulations was run to examine the performance of the VDCNN-SS model, and the experimental results demonstrated the superiority of the VDCNN-SS technique over recent techniques.
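As a rough illustration of the kind of LR–HR correlation mapping the abstract describes, the sketch below shows a minimal very-deep-CNN super-resolution model in PyTorch with residual learning. The layer depth, channel width, and class name are assumptions for illustration only, not the authors' exact VDCNN-SS architecture, and the spatiotemporal similarity component is omitted.

```python
# Minimal sketch of a VDSR-style very deep CNN for single-frame super-resolution
# (assumed architecture; not the authors' exact VDCNN-SS model).
import torch
import torch.nn as nn

class VeryDeepSRNet(nn.Module):
    """Maps a bicubically upscaled LR frame to an HR estimate via residual learning."""
    def __init__(self, depth: int = 20, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network predicts the high-frequency detail
        # that is added back to the interpolated LR input.
        return x + self.body(x)

if __name__ == "__main__":
    model = VeryDeepSRNet()
    lr_upscaled = torch.randn(1, 1, 64, 64)   # a bicubic-upscaled luminance patch
    hr_estimate = model(lr_upscaled)
    print(hr_estimate.shape)  # torch.Size([1, 1, 64, 64])
```

In this residual formulation the network only has to learn the missing high-frequency detail, which is what makes very deep stacks of small convolutions practical for SR.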
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.