Article Details

Journal of Computers (indexed in EI, MEDLINE, Scopus)

Title: An End-to-End Multi-Scale Conditional Generative Adversarial Network for Image Deblurring
Volume/Issue: 34:3
Authors: Fei Qi, Chen-Qing Wang
Pages: 237-250
Keywords: conditional generative adversarial network; image deblurring; multi-scale; end-to-end
Publication Date: June 2023
DOI: 10.53106/199115992023063403017

Abstract

Multi-scale approaches have recently been widely used in deep learning methods for image deblurring. In this paper, a novel multi-scale conditional generative adversarial network (CGAN) is proposed to make full use of image features, and it outperforms most state-of-the-art methods. We define a generator network and a discriminator network. First, we use the multi-scale residual modules proposed in this paper as the main feature extraction blocks and add skip connections to extract multi-scale image features at a finer granularity in the generator network. Second, we construct a PatchGAN as the discriminator network to enhance its local feature extraction capability. In addition, we combine the adversarial loss based on Wasserstein GAN with gradient penalty (WGAN-GP) and a content loss defined by perceptual loss to form the total loss function, which improves the content consistency between the generated images and the ground-truth sharp images. Experimental results show that the proposed method outperforms state-of-the-art methods in both visual and quantitative comparisons.
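
Because the abstract names the loss design concretely (a WGAN-GP adversarial term combined with a perceptual content term), a minimal sketch of such a combined objective is given below, assuming a PyTorch implementation. The VGG-19 feature cut-off, the weights lambda_gp and lambda_content, and all helper names are illustrative assumptions, not the paper's reported configuration.

    # Minimal PyTorch sketch of a WGAN-GP adversarial loss combined with a
    # perceptual (content) loss, in the spirit of the abstract. Coefficients
    # and the VGG layer choice are illustrative assumptions, not the authors'.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg19, VGG19_Weights

    def gradient_penalty(discriminator, real, fake, device):
        """WGAN-GP term: push the critic's gradient norm toward 1 on random
        interpolations between real (sharp) and generated images."""
        alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
        interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        d_interp = discriminator(interp)
        grads = torch.autograd.grad(
            outputs=d_interp, inputs=interp,
            grad_outputs=torch.ones_like(d_interp),
            create_graph=True, retain_graph=True)[0]
        grads = grads.view(grads.size(0), -1)
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()

    class PerceptualLoss(nn.Module):
        """Content loss: MSE between VGG-19 feature maps of the deblurred
        output and the ground-truth sharp image (layer cut-off assumed)."""
        def __init__(self):
            super().__init__()
            vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:15].eval()
            for p in vgg.parameters():
                p.requires_grad = False
            self.vgg = vgg
            self.criterion = nn.MSELoss()

        def forward(self, generated, target):
            return self.criterion(self.vgg(generated), self.vgg(target))

    def generator_loss(discriminator, perceptual, generated, sharp,
                       lambda_content=100.0):
        """Total generator loss: adversarial term plus weighted content term."""
        adv = -discriminator(generated).mean()   # WGAN generator objective
        content = perceptual(generated, sharp)   # perceptual consistency
        return adv + lambda_content * content

    def critic_loss(discriminator, generated, sharp, device, lambda_gp=10.0):
        """Critic loss: Wasserstein estimate plus gradient penalty."""
        d_real = discriminator(sharp).mean()
        d_fake = discriminator(generated.detach()).mean()
        gp = gradient_penalty(discriminator, sharp, generated.detach(), device)
        return d_fake - d_real + lambda_gp * gp

If the discriminator is a PatchGAN as described in the abstract, its output is a patch-wise score map rather than a single scalar; the sketch handles this by averaging the map and by passing grad_outputs of matching shape in the gradient-penalty term.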
