Zero-shot Single Image Restoration through Controlled Perturbation of Koschmieder's Model

Aupendu Kar, Sobhan Kanti Dhara, Debashis Sen, Prabir Kumar Biswas

Department of Electronics and Electrical Communication Engineering
Indian Institute of Technology Kharagpur, India



Haze Removal: aerial view of a village filled with smog or fog, before and after our technique.
Underwater Enhancement: underwater coral image, before and after our technique.
Lowlight Enhancement: low-light view of a room, before and after our technique.

 Abstract

Real-world image degradation due to light scattering can be described by Koschmieder's model. Training deep models to restore such degraded images is challenging because real-world paired data is scarce and synthetic paired data may suffer from domain shift. In this paper, a zero-shot single real-world image restoration model is proposed that leverages a theoretically deduced property of degradation under Koschmieder's model. Our zero-shot network estimates the parameters of Koschmieder's model that describe the degradation in the input image, and uses them to perform restoration. We show that a suitable degradation of the input image amounts to a controlled perturbation of the Koschmieder's model describing the image's formation. The zero-shot network is optimized by seeking to maintain the relation between its estimates of the model's parameters before and after the controlled perturbation, along with a few no-reference losses. Image dehazing and underwater image restoration are carried out using the proposed zero-shot framework, which in general outperforms the state of the art quantitatively and subjectively on multiple standard real-world image datasets. Additionally, the application of our zero-shot framework to low-light image enhancement is demonstrated.
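Koschmieder's model writes a degraded observation as I = J·t + A·(1 - t), where J is the clean scene radiance, t the transmission map, and A the global atmospheric (or background) light. A minimal numpy sketch of this model, together with an illustrative reading of the "controlled perturbation" idea (re-degrading the input with the same light yields another Koschmieder image of the same scene, with multiplied transmissions); the paper's exact perturbation and training procedure may differ:

```python
import numpy as np

def koschmieder_degrade(J, t, A):
    """Koschmieder's model: I = J * t + A * (1 - t).

    J : clean scene radiance, array with values in [0, 1]
    t : transmission (scalar or map broadcastable to J), in (0, 1]
    A : global atmospheric light (scalar or per-channel)
    """
    return J * t + A * (1.0 - t)

# Re-degrading I with a further transmission t2 and the SAME light A
# collapses to a single Koschmieder degradation of J with transmission
# t1 * t2 -- the kind of relation a zero-shot network can be asked to
# preserve between its estimates before and after the perturbation.
J = np.random.rand(4, 4)
t1, t2, A = 0.6, 0.9, 0.8
I = koschmieder_degrade(J, t1, A)
I_perturbed = koschmieder_degrade(I, t2, A)
assert np.allclose(I_perturbed, koschmieder_degrade(J, t1 * t2, A))
```

The closing assertion is a direct algebraic identity: (J·t1 + A(1 - t1))·t2 + A(1 - t2) = J·t1·t2 + A(1 - t1·t2).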


 Highlights

  1. To the best of our knowledge, the proposed approach is the first that can be used for image restoration in all application domains where the degradation can be formulated using Koschmieder's model.
  2. To the best of our knowledge, the proposed zero-shot learning approach for dehazing and underwater image restoration is the first of its kind that requires no prior-based loss function or regularizer. Further, it is probably the first zero-shot approach for underwater image restoration.
  3. Despite being zero-shot, our approach outperforms or matches the state of the art in real-world image dehazing and underwater image restoration. We further demonstrate its use for low-light image enhancement, where Koschmieder's model can also be employed.

 Zero-Shot Learning Framework

Training Model: algorithm for the iterative transmission and atmospheric light update based recurrent neural network for single image dehazing.
Testing Model: architecture of the iterative transmission and atmospheric light update based recurrent neural network for single image dehazing.

 Loss Function

Transmission Relation Loss
Light Similarity Loss
Saturated Pixel Penalty
Gray-world Assumption Loss
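Two of these no-reference terms can be sketched directly: the gray-world assumption loss penalizes color cast by pushing the per-channel means of the restored image toward each other, and the saturated-pixel penalty discourages values driven outside the valid range. A hedged numpy sketch, assuming a restored image J of shape H×W×3 with values nominally in [0, 1]; the paper's exact formulations and weightings may differ:

```python
import numpy as np

def gray_world_loss(J):
    """Gray-world assumption: a cast-free natural image has roughly
    equal R, G, B means, so penalize their pairwise squared deviations.
    J : restored image, H x W x 3."""
    mu = J.reshape(-1, 3).mean(axis=0)  # per-channel mean
    return (mu[0] - mu[1]) ** 2 + (mu[1] - mu[2]) ** 2 + (mu[0] - mu[2]) ** 2

def saturated_pixel_penalty(J):
    """Penalize intensities pushed outside [0, 1], i.e. pixels that
    would clip to pure white or pure black after restoration."""
    return np.maximum(J - 1.0, 0.0).sum() + np.maximum(-J, 0.0).sum()
```

For a neutral gray image the gray-world loss is zero; adding a uniform red cast makes it positive, which is the behavior the loss exploits to remove color casts without any reference image.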

 Results

Table 1. Quantitative comparison of the different dehazing approaches on standard hazy image datasets. Higher PSNR, SSIM, VI and RI are better, lower CIEDE2000 is better. (Best: Red highlight, Second best: Blue highlight)
Table 2. Quantitative comparison of the different underwater image restoration approaches on standard underwater image datasets. Higher UIQM and UCIQE are better. (Best: Red highlight, Second best: Blue highlight)
Table 3. Quantitative comparison of the different low-light image enhancement approaches on a standard low-light image dataset (LOL-v1 dataset, 15 test images). Our results on LOL-v2 images (100 test images) are PSNR-16.42, SSIM-0.57, CIEDE2000-46.90, LPIPS-VGG-0.41. Higher PSNR, SSIM are better, lower CIEDE2000 is better. (Best: Red highlight, Second best: Blue highlight)

 Ablation Studies

A study of the importance of the different loss functions in our zero-shot learning framework. Haze reduction, color cast reduction and pixel saturation prevention may be noted. Panels, left to right:
Input
All Loss (using L)
w/o Color Loss (L_GW)
w/o Similarity Loss (L_TR & L_LS)
w/o Saturation Loss (L_SPW & L_SPB)

 The Estimates in our Restoration Approach

Visualization of our approach's estimates. Panels, left to right:
Input (I)
T-map (t)
A/GB-light (A)
Output (J)

 Downloads

Paper
Poster
Code
Results

 References

  • Swinehart, D.F., 1962. The Beer-Lambert law. Journal of Chemical Education, 39(7), p.333.
  • Gandelsman, Y., Shocher, A. and Irani, M., 2019. "Double-DIP": Unsupervised image decomposition via coupled deep-image-priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11026-11035).
  • Li, B., Gou, Y., Liu, J.Z., Zhu, H., Zhou, J.T. and Peng, X., 2020. Zero-shot image dehazing. IEEE Transactions on Image Processing, 29, pp.8457-8466.
  • Zhang, L., Zhang, L., Liu, X., Shen, Y., Zhang, S. and Zhao, S., 2019. Zero-shot restoration of back-lit images using deep internal learning. In Proceedings of the 27th ACM International Conference on Multimedia (pp. 1623-1631).
  • Uplavikar, P.M., Wu, Z. and Wang, Z., 2019. All-in-one underwater image enhancement using domain-adversarial learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 1-8).

 Citation (BibTeX)

@InProceedings{Kar_2021_CVPR,
  author    = {Kar, Aupendu and Dhara, Sobhan Kanti and Sen, Debashis and Biswas, Prabir Kumar},
  title     = {Zero-shot Single Image Restoration through Controlled Perturbation of Koschmieder's Model},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {16205-16215}
}