Sub-Aperture Feature Adaptation in Single Image Super-resolution Model for Light Field Imaging

Aupendu Kar, Suresh Nehra, Jayanta Mukhopadhyay, Prabir Kumar Biswas

Department of Electronics and Electrical Communication Engineering
Indian Institute of Technology Kharagpur, India


The feature extraction and upscaling modules come from any pre-trained SISR model. The adaptation module aims to learn additional information from multiple sub-aperture images. During training, the weights of the adaptation module are updated (shown as an 'unlocked' symbol), while the weights of the two pre-trained modules are kept fixed (shown using 'locked' symbols).

 Abstract

With the availability of commercial Light Field (LF) cameras, LF imaging has emerged as an up-and-coming technology in computational photography. However, spatial resolution is significantly constrained in commercial micro-lens-based LF cameras because of the inherent multiplexing of spatial and angular information, and this becomes the main bottleneck for other applications of light field cameras. This paper proposes an adaptation module in a pre-trained Single Image Super-Resolution (SISR) network to leverage the powerful SISR model instead of using highly engineered, light-field-domain-specific super-resolution models. The adaptation module consists of a Sub-aperture Shift block and a fusion block. It adapts the SISR network to further exploit the spatial and angular information in LF images and thereby improve super-resolution performance. Experimental validation shows that the proposed method outperforms existing light field super-resolution algorithms. It also achieves PSNR gains of more than 1 dB across all datasets compared to the same pre-trained SISR models for scale factor 2, and PSNR gains of 0.6–1 dB for scale factor 4.


 Highlights

  1. We propose a light-field domain adaptation module to achieve LFSR using SISR models. To the best of our knowledge, this is the first work in this direction.
  2. We show that the proposed module can utilize the angular information present in SA images to improve performance, and ablation studies support our claims.
  3. Our qualitative and quantitative analysis shows that our method performs better than light-field-domain-specific super-resolution solutions, and any SISR model can adopt our proposed modification to make it work for LFSR.

 Proposed Module

The proposed sub-aperture feature adaptation module consists of n SAS modules and one fusion module. fi is the feature extracted from a sub-aperture image by the pre-trained model Ffeat, and f'i is the modulated feature enriched with information acquired from the other sub-aperture images. 'Conv, a, b, k' denotes a 2D convolution with a input channels, b output channels, and kernel size k.

A pre-trained SISR model has two parts: a feature extraction module Ffeat and an upscaling cum reconstruction module Fup. Ffeat extracts the salient features from a single image, which are then up-scaled by Fup. Our main objective is to introduce a module that modulates the features extracted by Ffeat by exploiting angular information across SAIs.
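To make the data flow concrete, here is a minimal NumPy sketch of the idea, not the authors' implementation: a toy integer shift stands in for the learned Sub-aperture Shift (SAS) block, and a weighted sum across views stands in for the learned 1×1-conv fusion. The per-view offsets and weights below are illustrative placeholders.

```python
import numpy as np

def sas_shift(feat, du, dv):
    """Toy stand-in for the SAS block: integer-shift a (C, H, W)
    feature map by the view's angular offset (the real block is learned)."""
    return np.roll(feat, shift=(du, dv), axis=(1, 2))

def fuse(shifted_feats, weights):
    """Toy stand-in for the fusion module: normalized weighted sum
    across views (the real module uses learned convolutions)."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    # Contract the view axis: (n,) x (n, C, H, W) -> (C, H, W)
    return np.tensordot(w, np.stack(shifted_feats), axes=1)

# n = 4 sub-aperture views with features f_i of shape (C, H, W)
rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 16, 16)) for _ in range(4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # illustrative per-view disparities

shifted = [sas_shift(f, du, dv) for f, (du, dv) in zip(feats, offsets)]
f_mod = fuse(shifted, weights=[1.0] * 4)  # modulated feature f'_i
print(f_mod.shape)  # same shape as each f_i, so it can replace f_i in Fup
```

The key design point this illustrates is that the modulated feature f'_i keeps the shape of f_i, so the adaptation module can be dropped between the frozen Ffeat and Fup without changing either pre-trained module.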

 Results

Table 1: PSNR/SSIM values achieved by different methods for 2× and 4× SR. Our results are shown in bold.
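For reference, the standard PSNR definition behind Table 1 can be sketched as follows (a generic implementation, not the paper's evaluation script):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1            # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, noisy), 2))  # 20.0 dB
```

Because PSNR is logarithmic in MSE, a 1 dB gain corresponds to roughly a 21% reduction in mean squared reconstruction error (10^0.1 ≈ 1.26).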

 Ablation Study

Table 2: Model ablation studies of our proposed LFSAFA module and the effect of angular resolution on the reconstruction performance. All experiments are performed on the LFSAFA-RDN variant for 2× SR.
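The intuition behind the angular-resolution ablation, namely that fusing more sub-aperture views gives the model more complementary observations of the same scene, can be illustrated with a simple synthetic experiment (this is an analogy for intuition only, not the paper's ablation):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(42)
clean = rng.uniform(size=(64, 64))  # synthetic ground-truth image

def fused_psnr(n_views, sigma=0.1):
    """PSNR after averaging n independently degraded views of the scene."""
    views = [np.clip(clean + rng.normal(0, sigma, clean.shape), 0, 1)
             for _ in range(n_views)]
    return psnr(clean, np.mean(views, axis=0))

p1, p4, p9 = fused_psnr(1), fused_psnr(4), fused_psnr(9)
print(p1 < p4 < p9)  # True: more views -> higher reconstruction PSNR
```

Averaging is of course much weaker than a learned fusion, but the monotone trend mirrors what the angular-resolution ablation in Table 2 measures.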

Table number 3, caption is mentioned below.
Table 3: Comparative analysis of our proposed LFSAFA module-based LFSR models with their SISR counterparts.

 Visual Comparison

Qualitative comparison of our proposed LFSAFA-RDN with existing LFSR algorithms for 4× SR.

 Download

Paper
Code
Training & Testing Datasets


 Citation (BibTeX)

@inproceedings{kar2022subaperture,
title={Sub-Aperture Feature Adaptation in Single Image Super-resolution Model for Light Field Imaging},
author={Aupendu Kar and Suresh Nehra and Jayanta Mukhopadhyay and Prabir Kumar Biswas},
booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
year={2022},
organization={IEEE}
}