Contrastive learning, a prominent technique in high-level vision tasks, has recently been applied to low-level vision tasks. In addition to minimizing the reconstruction error, contrastive learning-based image restoration methods push the solution away from negative samples. This helps to refine the solution space effectively, mitigating the inherent ill-posedness of image restoration. Nevertheless, predominant methods rely on manually generated negative samples tailored to a specific image restoration task, which inherently introduces significant bias and limits applicability across a wide range of image restoration challenges. To address these challenges, in this work we develop a novel contrastive learning-based image restoration method by "learning from history", which dynamically generates negative samples from historical models. Our approach, named model contrastive learning for image restoration (MCLIR), rejuvenates historical models as negative models, making it compatible with diverse image restoration tasks without requiring additional task-specific priors. Furthermore, we fully exploit the statistical information of the negative models to generate an uncertainty map as a by-product of our model, from which we derive an uncertainty-aware reconstruction loss. Extensive experiments highlight the effectiveness of the proposed method. When implemented with existing models, MCLIR yields significant improvements across a range of tasks and architectures, encompassing both single-degradation tasks and all-in-one models. The code and retrained models can be accessed at https://github.com/Aitical/MCLIR.
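The core idea — pull the restored image toward the ground truth while pushing it away from the outputs of historical model snapshots — can be sketched as a ratio-style contrastive loss. The following is a minimal illustrative sketch, not the paper's implementation: the function name, the L1 distance, and the way negatives are averaged are all assumptions for illustration.

```python
import numpy as np

def model_contrastive_loss(output, target, negatives, eps=1e-8):
    # Pull term: reconstruction (L1) distance to the ground truth.
    pos = np.abs(output - target).mean()
    # Push term: average distance to negative samples, here standing in for
    # outputs of historical model snapshots (an illustrative assumption).
    neg = sum(np.abs(output - n).mean() for n in negatives) / len(negatives)
    # Minimizing the ratio pulls toward the target and pushes from negatives.
    return pos / (neg + eps)

rng = np.random.default_rng(0)
target = rng.random((8, 8))
good = target + 0.01 * rng.random((8, 8))   # near-converged restoration
bad = target + 0.5 * rng.random((8, 8))     # poor restoration
negatives = [target + 0.3 * rng.random((8, 8))]  # stand-in historical output

print(model_contrastive_loss(good, target, negatives))
print(model_contrastive_loss(bad, target, negatives))
```

The better restoration attains a much smaller loss, since its pull term shrinks while its distance to the stale negative stays large.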
Xianming Liu