Multi-scale adversarial diffusion network for image super-resolution
Blog Article
Abstract
Image super-resolution methods based on diffusion models have achieved remarkable success, but they still suffer from two significant limitations. On the one hand, these algorithms require a large number of denoising steps during sampling, which severely limits inference speed. On the other hand, although existing methods can generate diverse and detailed samples, they tend to perform unsatisfactorily on fidelity metrics such as peak signal-to-noise ratio (PSNR). To address these challenges, this paper proposes a Multi-Scale Adversarial Diffusion Network (MSADN) for image super-resolution. A time-dependent discriminator is introduced to model complex multimodal distributions, significantly improving the efficiency of single-step sampling.
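To make the idea of a time-dependent discriminator concrete, the following is a minimal PyTorch sketch of a patch discriminator whose prediction is conditioned on the diffusion timestep. The class name, channel widths, and the bias-injection conditioning scheme are illustrative assumptions, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class TimeDependentDiscriminator(nn.Module):
    """Illustrative patch discriminator conditioned on the diffusion timestep t.
    Channel widths and the conditioning scheme are assumptions for exposition,
    not the exact design used in MSADN."""
    def __init__(self, in_channels: int = 3, base: int = 64, t_dim: int = 128):
        super().__init__()
        self.base = base
        # Embed the scalar timestep into a per-channel bias vector
        self.t_embed = nn.Sequential(
            nn.Linear(1, t_dim), nn.SiLU(), nn.Linear(t_dim, base))
        self.stem = nn.Conv2d(in_channels, base, 4, stride=2, padding=1)
        self.body = nn.Sequential(
            nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 4, 1, 3, stride=1, padding=1))  # patch-level real/fake logits

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        emb = self.t_embed(t.float().view(-1, 1))          # (B, base)
        h = self.stem(x) + emb.view(-1, self.base, 1, 1)   # inject timestep as a bias
        return self.body(h)


# Usage sketch: score a batch of 128x128 samples at random timesteps
disc = TimeDependentDiscriminator()
x = torch.randn(2, 3, 128, 128)
t = torch.randint(0, 1000, (2,))
logits = disc(x, t)  # shape (2, 1, 16, 16)
```

Because the discriminator sees the timestep, it can judge samples at every noise level, which is what allows the adversarial objective to sharpen the generator's one-step predictions.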
A Multi-Scale Generation Guidance (MSGG) module is designed to help the model learn feature information at different scales from low-resolution images, thereby enhancing its feature representation capability. Furthermore, to mitigate the blurring artifacts introduced during the denoising process, a high-frequency loss function is proposed that targets the residuals of high-frequency features between images, ensuring that the predicted images exhibit more realistic texture details. Experimental results show that, compared with other diffusion-based super-resolution methods, our approach offers faster inference and superior performance on benchmark datasets.
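As one way to read the high-frequency loss, the sketch below computes an L1 penalty between the high-frequency residuals of the predicted and ground-truth images, where the high-frequency component is taken as the image minus a Gaussian-blurred copy. The specific high-pass filter and the choice of L1 norm are assumptions for illustration, since the abstract does not spell them out.

```python
import torch
import torch.nn.functional as F

def high_frequency_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """L1 distance between the high-frequency residuals of the super-resolved
    image `sr` and the ground-truth image `hr` (both of shape B x C x H x W).
    The high-pass filter here (image minus a 3x3 Gaussian blur) is an
    illustrative choice, not necessarily the one used in the paper."""
    c = sr.shape[1]
    kernel = torch.tensor([[1., 2., 1.],
                           [2., 4., 2.],
                           [1., 2., 1.]], device=sr.device, dtype=sr.dtype) / 16.0
    kernel = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)  # depthwise blur kernel

    def high_pass(x: torch.Tensor) -> torch.Tensor:
        blurred = F.conv2d(x, kernel, padding=1, groups=c)
        return x - blurred  # residual keeps edges and fine texture

    return F.l1_loss(high_pass(sr), high_pass(hr))
```

In a full training setup this term would presumably be added to the diffusion and adversarial objectives with a weighting coefficient, so that the penalty on blurred edges complements, rather than replaces, the main reconstruction loss.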