Anwar Ullah¹, Xinguo Yu¹·*, Muhammad Numan²
CMC-Computers, Materials & Continua, Vol. 77, No. 2, pp. 2359-2383, 2023, DOI: 10.32604/cmc.2023.041219. Published 29 November 2023.
Abstract: Generating realistic, synthetic video from text is a highly challenging task due to the multitude of issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as a Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as a Discriminator (D) to ensure temporal coherence between adjacent frames. The…