Denoising Diffusion Models for Plug-and-Play Image Restoration

Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, Luc Van Gool
¹ETH Zürich  ²Nanyang Technological University  ³University of Würzburg  ⁴KU Leuven
CVPR 2023 Workshop (NTIRE)
[Figure] Illustration of our plug-and-play sampling method.

Abstract

Plug-and-play Image Restoration (IR) has been widely recognized as a flexible and interpretable method for solving various inverse problems by utilizing any off-the-shelf denoiser as the implicit image prior. However, most existing methods focus on discriminative Gaussian denoisers. Although diffusion models have shown impressive performance for high-quality image synthesis, their potential to serve as a generative denoiser prior for plug-and-play IR methods remains underexplored. While several attempts have been made to adopt diffusion models for image restoration, they either fail to achieve satisfactory results or typically require an unacceptable number of Neural Function Evaluations (NFEs) during inference. This paper proposes DiffPIR, which integrates the traditional plug-and-play method into the diffusion sampling framework. Compared to plug-and-play IR methods that rely on discriminative Gaussian denoisers, DiffPIR is expected to inherit the generative ability of diffusion models. Experimental results on three representative IR tasks, namely super-resolution, image deblurring, and inpainting, demonstrate that DiffPIR achieves state-of-the-art performance on both the FFHQ and ImageNet datasets in terms of reconstruction faithfulness and perceptual quality, with no more than 100 NFEs.
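
For intuition, below is a minimal sketch of a DiffPIR-style sampling loop for noisy inpainting, written in PyTorch. It is not the official implementation: it assumes a pretrained eps-prediction DDPM denoiser eps_model(x_t, t), and the linear beta schedule, the default values of lambda_ and zeta, the closed-form data solver (valid only for a diagonal masking operator), and the zero-predicting stand-in model in the usage example are all illustrative choices based on the paper's high-level description.

    import torch

    def diffpir_inpaint(eps_model, y, mask, sigma_n=0.05, T=100, lambda_=7.0, zeta=0.3):
        """Restore x from y = mask * x + noise, using one NFE per step (<= T total)."""
        betas = torch.linspace(1e-4, 0.02, T)          # standard DDPM noise schedule
        abar = torch.cumprod(1.0 - betas, dim=0)       # cumulative product \bar{alpha}_t
        x = torch.randn_like(y)                        # start from pure Gaussian noise
        for t in reversed(range(T)):
            # 1) Denoising step: the diffusion model acts as the implicit prior
            #    and yields an estimate x0 of the clean image.
            eps = eps_model(x, t)
            x0 = (x - (1 - abar[t]).sqrt() * eps) / abar[t].sqrt()
            # 2) Data step: proximal solution of
            #    min_x ||y - mask*x||^2 / (2 sigma_n^2) + (rho_t / 2) ||x - x0||^2,
            #    which is elementwise (closed-form) for a diagonal masking operator.
            sigma_t2 = (1 - abar[t]) / abar[t]         # effective noise variance at step t
            rho = lambda_ * sigma_n**2 / sigma_t2
            x0_hat = (mask * y / sigma_n**2 + rho * x0) / (mask / sigma_n**2 + rho)
            # 3) Resampling: map x0_hat back to time t-1, mixing the implied noise
            #    direction with fresh noise, with the trade-off controlled by zeta.
            eps_hat = (x - abar[t].sqrt() * x0_hat) / (1 - abar[t]).sqrt()
            noise = (1 - zeta) ** 0.5 * eps_hat + zeta ** 0.5 * torch.randn_like(x)
            abar_prev = abar[t - 1] if t > 0 else torch.ones(())
            x = abar_prev.sqrt() * x0_hat + (1 - abar_prev).sqrt() * noise
        return x

    # Toy usage with a zero-predicting stand-in for a pretrained diffusion model.
    y_full = torch.rand(1, 3, 64, 64)
    mask = (torch.rand_like(y_full) > 0.5).float()
    restored = diffpir_inpaint(lambda x, t: torch.zeros_like(x), y_full * mask, mask)

The key point the sketch illustrates is the alternation at every timestep between a denoising step (the generative prior) and a data-consistency step (the degradation model), which is what lets a single pretrained diffusion model serve as a plug-and-play prior across different IR tasks.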

BibTeX


@inproceedings{zhu2023denoising,
  title     = {Denoising Diffusion Models for Plug-and-Play Image Restoration},
  author    = {Yuanzhi Zhu and Kai Zhang and Jingyun Liang and Jiezhang Cao and Bihan Wen and Radu Timofte and Luc Van Gool},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE)},
  year      = {2023},
}

Acknowledgments

This work was partly supported by the ETH Zürich General Fund (OK), the Alexander von Humboldt Foundation, and the Huawei Fund.