Open Access

ARTICLE

Image to Image Translation Based on Differential Image Pix2Pix Model

by Xi Zhao1, Haizheng Yu1,*, Hong Bian2

1 College of Mathematics and System Sciences, Xinjiang University, Urumqi, 830017, China
2 School of Mathematical Sciences, Xinjiang Normal University, Urumqi, 830017, China

* Corresponding Author: Haizheng Yu.

Computers, Materials & Continua 2023, 77(1), 181-198. https://doi.org/10.32604/cmc.2023.041479

Abstract

In recent years, Pix2Pix, a generative adversarial network (GAN) model, has found widespread application in the field of image-to-image translation. However, the traditional Pix2Pix model suffers from significant drawbacks in image generation, such as the loss of important feature information during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we utilize the U-Net++ network as the generator of the Pix2Pix model, incorporating denser skip connections to minimize information loss. Secondly, to strengthen the constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further enhancing the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate our proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models on the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the experimental results reveals that the use of the U-Net++ generator effectively reduces the loss of feature information, while the Pix2Pix model incorporating the differential image discriminator enhances the supervision of the generator during training. Both of these enhancements collectively improve the quality of Pix2Pix-generated images.
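The following is a minimal sketch (not the authors' code) of how a differential image discriminator might be added to a standard Pix2Pix training step, assuming the "differential image" is the pixel-wise difference between an output image and its input condition; the generator and discriminator modules, optimizers, and the L1 weight `lam` are illustrative placeholders.

```python
# Hypothetical sketch of Pix2Pix training with an extra differential-image
# discriminator, based only on the high-level description in the abstract.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(G, D_img, D_diff, x, y, opt_G, opt_D, lam=100.0):
    """x: input image, y: ground-truth target (both N x C x H x W)."""
    y_fake = G(x)

    # Discriminator update: real vs. fake on conditioned images and on differential images.
    d_real = D_img(torch.cat([x, y], dim=1))
    d_fake = D_img(torch.cat([x, y_fake.detach()], dim=1))
    diff_real = D_diff(y - x)                 # assumed differential image: target minus input
    diff_fake = D_diff(y_fake.detach() - x)   # assumed differential image: output minus input
    loss_D = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)) +
                    bce(diff_real, torch.ones_like(diff_real)) +
                    bce(diff_fake, torch.zeros_like(diff_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: fool both discriminators plus the usual Pix2Pix L1 reconstruction term.
    g_img = D_img(torch.cat([x, y_fake], dim=1))
    g_diff = D_diff(y_fake - x)
    loss_G = (bce(g_img, torch.ones_like(g_img)) +
              bce(g_diff, torch.ones_like(g_diff)) +
              lam * l1(y_fake, y))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

In this reading, the extra discriminator supervises the change the generator applies to the input rather than the output alone, which is one plausible way to realize the additional constraint described in the abstract.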

Keywords


Cite This Article

APA Style
Zhao, X., Yu, H., & Bian, H. (2023). Image to image translation based on differential image pix2pix model. Computers, Materials & Continua, 77(1), 181-198. https://doi.org/10.32604/cmc.2023.041479
Vancouver Style
Zhao X, Yu H, Bian H. Image to image translation based on differential image pix2pix model. Comput Mater Contin. 2023;77(1):181-198. https://doi.org/10.32604/cmc.2023.041479
IEEE Style
X. Zhao, H. Yu, and H. Bian, “Image to Image Translation Based on Differential Image Pix2Pix Model,” Comput. Mater. Contin., vol. 77, no. 1, pp. 181-198, 2023. https://doi.org/10.32604/cmc.2023.041479



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.