IEEE Transactions on Computational Imaging

In snapshot compressive imaging (SCI), a central challenge is how to exploit priors to recover the original high-dimensional data from its lower-dimensional measurements. Recent plug-and-play (PnP) methods that plug in deep denoisers have achieved superior performance, and their convergence has been guaranteed under the assumption of bounded denoisers and the condition of diminishing noise levels. However, it is difficult to explicitly prove the boundedness of existing deep denoisers because of their complex network architectures.
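
As a rough illustration of this setting, the sketch below shows a generic plug-and-play iteration for a linear SCI model y = Φx, alternating a measurement-consistency projection with a denoising step whose noise level diminishes over the iterations. The sensing matrix `Phi`, the `denoise` callable (e.g., a pretrained Gaussian denoiser wrapped as a function), and the schedule `sigmas` are placeholders chosen for illustration, not the specific algorithm analyzed in the paper.

```python
import numpy as np

def pnp_sci(y, Phi, denoise, sigmas, x0=None):
    """Minimal plug-and-play sketch for snapshot compressive imaging,
    assuming measurements y = Phi @ x with a known sensing matrix Phi.
    `denoise(v, sigma)` stands in for the (assumed bounded) deep denoiser,
    and `sigmas` is a diminishing sequence of noise levels."""
    x = Phi.T @ y if x0 is None else x0
    PhiPhiT_inv = np.linalg.pinv(Phi @ Phi.T)   # precompute for the projection step
    for sigma in sigmas:                        # diminishing noise levels
        # projection step: enforce consistency with the measurement y
        v = x + Phi.T @ (PhiPhiT_inv @ (y - Phi @ x))
        # denoising step: plug in the denoiser as an implicit prior
        x = denoise(v, sigma)
    return x
```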

Images captured in low-light environments suffer from severe degradation due to insufficient light, which degrades the performance of industrial and civilian devices. To address the noise, chromatic aberration, and detail distortion that remain when low-light images are enhanced with existing methods, this paper proposes an integrated learning approach (LightingNet) for low-light image enhancement.

Reconstruction of CT images from a limited set of projections through an object is important in several applications ranging from medical imaging to industrial settings. As the number of available projections decreases, traditional reconstruction techniques such as the FDK algorithm and model-based iterative reconstruction methods perform poorly.
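
For context, a model-based iterative reconstruction can be sketched as regularized least squares on a linear projection model. The quadratic penalty, fixed step size, and nonnegativity projection below are illustrative simplifications rather than a particular published method; practical MBIR typically uses edge-preserving priors and more careful optimization.

```python
import numpy as np

def mbir_gradient_descent(y, A, lam=0.01, step=1e-3, iters=200):
    """Minimal sketch of model-based iterative reconstruction in the form
    min_x 0.5*||A x - y||^2 + 0.5*lam*||x||^2, where A is a (possibly sparse)
    projection operator mapping the image x to the measured projections y."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * x     # data-fidelity + prior gradient
        x = np.maximum(x - step * grad, 0.0)   # projected step (nonnegative attenuation)
    return x
```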

Robustness and stability of image-reconstruction algorithms have recently come under scrutiny. Their importance to medical imaging cannot be overstated. We review the known results for the topical variational regularization strategies (ℓ2 and ℓ1 regularization) and present novel stability results for ℓp-regularized linear inverse problems for p ∈ (1, ∞). Our results guarantee Lipschitz continuity for small p and Hölder continuity for larger p. They generalize well to the L^p(Ω) function spaces.
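
For concreteness, the ℓp-regularized linear inverse problem referred to above can be written in the following generic form; the operator A, data y, and weight λ are illustrative notation, and the precise functional studied in the paper may differ.

```latex
% generic lp-regularized linear inverse problem (illustrative notation)
\min_{x} \; \tfrac{1}{2}\,\lVert Ax - y \rVert_2^2 \;+\; \lambda \,\lVert x \rVert_p^p,
\qquad p \in (1, \infty).

% stability asks how the minimizer x^\dagger(y) varies with the data y, e.g.
\lVert x^\dagger(y_1) - x^\dagger(y_2) \rVert \;\le\; C\,\lVert y_1 - y_2 \rVert^{\gamma},
\quad \gamma = 1 \ \text{(Lipschitz)}, \ \ \gamma \in (0,1) \ \text{(H\"older)}.
```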

Spatial-angular separable convolution (SAS-conv) has been widely used for efficient and effective 4D light field (LF) feature embedding in different tasks; it mimics a 4D convolution by alternately operating on 2D spatial slices and 2D angular slices. In this paper, we argue that, despite its global intensity modeling capability, SAS-conv can only embed local geometry information into the features, resulting in inferior performance in regions with textures and occlusions. Because the epipolar lines are highly related to scene depth, we introduce the concept of spatial-angular correlated convolution (SAC-conv).
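
To make the SAS-conv factorization concrete, the sketch below alternates a 2D convolution over spatial slices with a 2D convolution over angular slices of a 4D light field tensor. The channel widths, kernel sizes, and the (B, C, U, V, H, W) layout are assumptions chosen for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SASConv(nn.Module):
    """Minimal sketch of spatial-angular separable convolution on a light
    field tensor of shape (B, C, U, V, H, W), with (U, V) angular and
    (H, W) spatial coordinates."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.angular = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, u, v, h, w = x.shape
        # 2D convolution over each spatial slice (fold angular dims into batch)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = self.act(self.spatial(x))
        c2 = x.shape[1]
        # 2D convolution over each angular slice (fold spatial dims into batch)
        x = x.reshape(b, u, v, c2, h, w).permute(0, 4, 5, 3, 1, 2).reshape(b * h * w, c2, u, v)
        x = self.act(self.angular(x))
        # restore the (B, C, U, V, H, W) layout
        return x.reshape(b, h, w, c2, u, v).permute(0, 3, 4, 5, 1, 2)

# toy usage: a 5x5-view light field with 32x32 spatial resolution
lf = torch.randn(1, 3, 5, 5, 32, 32)
out = SASConv(3, 16)(lf)   # -> (1, 16, 5, 5, 32, 32)
```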

Hyperspectral imaging (HSI) has become an invaluable imaging tool for many applications in astrophysics and Earth observation. Unfortunately, direct observation of hyperspectral images is impossible, since the actual measurements are 2-D and suffer from strong spatial and spectral degradations, especially in the infrared.

Superpixels provide local pixel coherence and respect object boundaries, which benefits stereo matching. Recently, superpixel cues have been introduced into deep stereo networks. These methods develop a superpixel-based sampling scheme to downsample the input color images and upsample the output disparity maps. However, image details are inevitably lost during downsampling, and the upsampling process also introduces errors into the final disparity. Moreover, this mechanism limits the possibility of utilizing larger and multi-scale superpixels, which are important for alleviating matching ambiguity.

We introduce an efficient synthetic electrode selection strategy for use in Adaptive Electrical Capacitance Volume Tomography (AECVT). The proposed strategy is based on the Adaptive Relevance Vector Machine (ARVM) method and successively obtains the synthetic electrode configurations that yield the greatest decrease in image-reconstruction uncertainty for the spatial distribution of the permittivity in the region of interest.

In this paper, we explore the spatiospectral image super-resolution (SSSR) task, i.e., joint spatial and spectral super-resolution, which aims to generate a high-spatial-resolution hyperspectral image (HR-HSI) from a low-spatial-resolution multispectral image (LR-MSI). To tackle such a severely ill-posed problem, one straightforward but inefficient approach is to apply a single-image super-resolution (SISR) network followed by a spectral super-resolution (SSR) network in a two-stage manner, or in the reverse order.
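
As a reference point for this two-stage baseline, the sketch below chains a spatial upsampling stage and a spectral expansion stage. The band counts, scale factor, and plain convolutional sub-networks are placeholder assumptions, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class TwoStageSSSR(nn.Module):
    """Minimal sketch of the two-stage baseline: SISR upsamples the LR-MSI
    spatially, then SSR expands its few bands into many hyperspectral bands."""
    def __init__(self, ms_bands=4, hs_bands=31, scale=4):
        super().__init__()
        self.sisr = nn.Sequential(                      # spatial upsampling stage
            nn.Conv2d(ms_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, ms_bands * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.ssr = nn.Sequential(                       # spectral expansion stage
            nn.Conv2d(ms_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, hs_bands, 3, padding=1),
        )

    def forward(self, lr_msi):                          # (B, ms_bands, h, w)
        hr_msi = self.sisr(lr_msi)                      # (B, ms_bands, h*scale, w*scale)
        return self.ssr(hr_msi)                         # (B, hs_bands, h*scale, w*scale)
```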

Conventional digital cameras typically accumulate all the photons within an exposure period to form a snapshot image. This requires the scene to remain nearly still during the exposure; otherwise, moving objects appear blurred. Recently, a retina-inspired spike camera has been proposed and has shown great potential for recording high-speed motion scenes. Instead of capturing the visual scene in a single snapshot, the spike camera records the dynamic light-intensity variation continuously.
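
As a rough illustration of how such a spike stream can be used, the sketch below assumes an integrate-and-fire pixel model, under which the per-pixel firing rate is approximately proportional to light intensity. This is a simple baseline for illustration, not the reconstruction method of any particular paper.

```python
import numpy as np

def intensity_from_spikes(spikes):
    """Minimal sketch of a per-pixel intensity estimate from a binary spike
    stream of shape (T, H, W), assuming each pixel fires whenever its
    accumulated light crosses a fixed threshold: brighter pixels fire more
    often, so the firing rate tracks the average intensity."""
    T = spikes.shape[0]
    rate = spikes.sum(axis=0) / float(T)   # spikes per time step at each pixel
    return rate                            # proportional to average light intensity
```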
