Spatial-Angular Versatile Convolution for Light Field Reconstruction


By: Zhen Cheng; Yutong Liu; Zhiwei Xiong

Spatial-angular separable convolution (SAS-conv) has been widely used for efficient and effective 4D light field (LF) feature embedding in different tasks, mimicking a 4D convolution by alternately operating on 2D spatial slices and 2D angular slices. In this paper, we argue that, despite its global intensity modeling capability, SAS-conv can only embed local geometry information into the features, resulting in inferior performance in regions with textures and occlusions. Because epipolar lines are closely related to scene depth, we introduce the concept of spatial-angular correlated convolution (SAC-conv). By alternating 2D convolutions on the vertical and horizontal epipolar slices, SAC-conv embeds global and robust geometry information into the features. Through a detailed feature and error analysis, we verify that SAS-conv and SAC-conv excel at different aspects of 4D LF feature embedding. Based on their complementarity, we further combine SAS-conv and SAC-conv via a parallel residual connection, forming a new spatial-angular versatile convolution (SAV-conv) module. We conduct comprehensive experiments on two representative LF reconstruction tasks, i.e., LF angular super-resolution and LF spatial super-resolution. Both quantitative and qualitative results demonstrate that, without any extra parameters, networks upgraded with our proposed SAV-conv module notably outperform those upgraded with SAS-conv and achieve new state-of-the-art performance.
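To make the slicing operations concrete, the following PyTorch sketch shows one way to realize SAS-conv (2D convolutions on spatial and angular slices), SAC-conv (2D convolutions on vertical and horizontal epipolar slices), and their parallel residual combination on a 4D LF feature tensor. The [B, C, U, V, H, W] layout, the channel widths, the activation, and the exact form of the residual connection are assumptions made for illustration, not the authors' released implementation; in particular, the paper states that SAV-conv adds no extra parameters over SAS-conv (presumably by splitting capacity between the two branches), whereas this sketch uses two full-width branches for readability.

```python
import torch
import torch.nn as nn


def _conv_on_plane(x, conv, plane):
    """Apply a 2D conv on a chosen pair of the four LF axes.

    x     : [B, C, U, V, H, W]  (angular U x V, spatial H x W)
    plane : two axis names from {'u', 'v', 'h', 'w'}; the other two
            axes are folded into the batch dimension.
    """
    axes = {'u': 2, 'v': 3, 'h': 4, 'w': 5}
    keep = [axes[p] for p in plane]                    # conv acts on these
    fold = [a for a in (2, 3, 4, 5) if a not in keep]  # folded into batch
    perm = [0] + fold + [1] + keep                     # B, f1, f2, C, k1, k2
    xp = x.permute(*perm).contiguous()
    b, f1, f2, c, k1, k2 = xp.shape
    y = conv(xp.view(b * f1 * f2, c, k1, k2))          # 2D conv on each slice
    y = y.view(b, f1, f2, -1, k1, k2)
    inv = [0] * 6                                      # undo the permutation
    for i, p in enumerate(perm):
        inv[p] = i
    return y.permute(*inv).contiguous()


class SASConv(nn.Module):
    """Spatial-angular separable conv: 2D conv on spatial (H, W) slices,
    then 2D conv on angular (U, V) slices."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.spatial = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.angular = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        x = self.act(_conv_on_plane(x, self.spatial, ('h', 'w')))
        return self.act(_conv_on_plane(x, self.angular, ('u', 'v')))


class SACConv(nn.Module):
    """Spatial-angular correlated conv: 2D conv on vertical epipolar (U, H)
    slices, then 2D conv on horizontal epipolar (V, W) slices."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.epi_v = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.epi_h = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        x = self.act(_conv_on_plane(x, self.epi_v, ('u', 'h')))
        return self.act(_conv_on_plane(x, self.epi_h, ('v', 'w')))


class SAVConv(nn.Module):
    """SAS-conv and SAC-conv combined by a parallel residual connection
    (one plausible reading: both branch outputs are summed with the input)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.sas = SASConv(channels, kernel_size)
        self.sac = SACConv(channels, kernel_size)

    def forward(self, x):
        return x + self.sas(x) + self.sac(x)


if __name__ == "__main__":
    lf = torch.randn(1, 16, 5, 5, 32, 32)  # B, C, U, V, H, W
    print(SAVConv(16)(lf).shape)           # torch.Size([1, 16, 5, 5, 32, 32])
```

In this reading, the SAS branch models intensity relations within each sub-aperture view and across the angular grid, while the SAC branch operates directly on epipolar planes, where disparity appears as line slopes, which is how it can inject depth-related structure into the shared feature tensor.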

The 4D light field (LF), first introduced in [1], records the intensity of light rays emitted from different positions and along different angles. With such redundant 4D spatio-angular information, LF imaging enables many new applications, such as post-capture refocusing [2], seeing through occlusions [3], and stereoscopic displays [4]. Moreover, many classic computer vision tasks, such as super-resolution, depth estimation, and material recognition, benefit from 4D LFs compared with 2D images [5][6][7][8][9], thanks to the additional geometric information encoded in multiple adjacent views. With the rapid development of deep learning, convolutional neural networks (CNNs) now dominate most LF processing tasks. For these tasks, efficient and effective modeling of the redundant 4D information for LF feature embedding is essential and has recently attracted increasing research attention.
