Aswathy Ravikumar, Harini Sriraman*
Computer Systems Science and Engineering, Vol. 46, No. 1, pp. 563-578, 2023 (published 20 January 2023). DOI: 10.32604/csse.2023.034710
Abstract: Deep neural networks are gaining importance and popularity in a wide range of applications and services. Because of their enormous number of learnable parameters and the size of their training datasets, training these networks is computationally costly. Parallel and distributed computation-based strategies are used to accelerate this training process. Generative Adversarial Networks (GANs) are a recent technological achievement in deep learning. These generative models are computationally expensive because a GAN consists of two neural networks and trains on enormous datasets. Typically, a GAN is trained on a single server. Conventional deep learning accelerator designs are challenged by the unique properties of GAN,…
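As an illustrative sketch only (the abstract does not specify a framework, and the model sizes, hyperparameters, and gradient-averaging scheme below are assumptions rather than the paper's method), the following Python/PyTorch snippet shows one common way to distribute GAN training across workers: each process trains both networks on its own data shard and gradients are averaged with an all-reduce before every optimizer step.

```python
# A minimal data-parallel GAN training sketch (assumed setup, not the paper's method).
# Each rank computes gradients on its own mini-batch; gradients are averaged
# across ranks before each parameter update.
import torch
import torch.nn as nn
import torch.distributed as dist

LATENT = 100  # size of the generator's noise input (illustrative choice)

def average_gradients(model, world_size):
    # Sum gradients from all workers, then divide to obtain the mean gradient.
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

def main():
    # torchrun sets RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT for us.
    dist.init_process_group("gloo")
    world_size = dist.get_world_size()

    # Two small MLPs standing in for the GAN's generator and discriminator.
    gen = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Tanh())
    disc = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                         nn.Linear(256, 1))

    # Keep all replicas identical at the start: broadcast rank 0's weights.
    for model in (gen, disc):
        for p in model.parameters():
            dist.broadcast(p.data, src=0)

    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(200):
        # Each rank draws its own mini-batch; a real run would shard the
        # training set with a DistributedSampler instead of random data.
        real = torch.rand(64, 784) * 2 - 1
        noise = torch.randn(64, LATENT)
        fake = gen(noise)

        # Discriminator step: label real samples 1 and generated samples 0.
        opt_d.zero_grad()
        d_loss = bce(disc(real), torch.ones(64, 1)) + \
                 bce(disc(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        average_gradients(disc, world_size)
        opt_d.step()

        # Generator step: try to make the discriminator output 1 on fakes.
        opt_g.zero_grad()
        g_loss = bce(disc(fake), torch.ones(64, 1))
        g_loss.backward()
        average_gradients(gen, world_size)
        opt_g.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. torchrun --nproc_per_node=2 ddp_gan_sketch.py
```

Launching one process per device (for example with `torchrun`) makes each replica see a different shard of the data while the averaged gradients keep the generator and discriminator weights synchronized across workers.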