IEEE Transactions on Computational Imaging


Sparsity and low-rank models have been popular for reconstructing images and videos from limited or corrupted measurements. Dictionary or transform learning methods are useful in applications such as denoising, inpainting, and medical image reconstruction.
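As a minimal illustration of the sparsity prior underlying such methods (using a fixed DCT rather than a learned dictionary or transform), denoising can be done by soft-thresholding transform coefficients:

```python
import numpy as np
from scipy.fft import dct, idct

def denoise_sparse(signal, threshold):
    """Denoise by soft-thresholding DCT coefficients (sparsity prior)."""
    coeffs = dct(signal, norm='ortho')
    # Soft-thresholding keeps the few large (signal) coefficients and
    # suppresses the many small (noise) ones.
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    return idct(shrunk, norm='ortho')

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.cos(2 * np.pi * 4 * t)        # smooth signal, sparse in the DCT
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = denoise_sparse(noisy, threshold=0.5)
```

Learned dictionaries and transforms replace the fixed DCT above with an operator adapted to the data, but the shrinkage step is the same in spirit.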

Good temporal representations are crucial for video understanding, and the state-of-the-art video recognition framework is based on two-stream networks. In such a framework, besides the regular ConvNet responsible for RGB frame inputs, a second network handles the temporal representation, usually the optical flow (OF).
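The standard way the two streams are combined at test time is late fusion: average the per-stream class probabilities. A minimal sketch (the class scores below are made-up numbers, not outputs of any real network):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical class scores for one clip over 3 action classes,
# one vector per stream.
rgb_scores = np.array([2.0, 0.5, 0.1])    # spatial (RGB) stream
flow_scores = np.array([0.3, 2.5, 0.2])   # temporal (optical flow) stream

# Two-stream late fusion: average the per-stream softmax outputs.
fused = 0.5 * (softmax(rgb_scores) + softmax(flow_scores))
pred = int(np.argmax(fused))
```

Here the temporal stream's confidence dominates the fused decision, which is exactly the benefit two-stream architectures seek from motion information.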

Three-dimensional (3-D) radar imaging can provide additional information about the target along the elevation dimension compared with conventional 2-D radar imaging, but it usually requires a huge amount of data collected over the 3-D frequency-azimuth-elevation space, which motivates performing 3-D imaging with sparsely sampled data. Traditional compressive sensing (CS) based 3-D imaging methods with sparse data convert the 3-D data into a long vector and then carry out the sensing and recovery steps.
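The vectorize-then-recover pipeline can be sketched with a toy sparse 3-D scene and a generic ISTA solver; the random sensing matrix and scatterer positions below are illustrative assumptions, not the actual radar measurement model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse 3-D scene (frequency x azimuth x elevation): two scatterers.
scene = np.zeros((8, 8, 8))
scene[2, 3, 4] = 1.0
scene[5, 1, 6] = -0.7
x_true = scene.ravel()               # convert the 3-D cube to a long vector

n = x_true.size                      # 512 unknowns
m = 200                              # sparsely sampled: m << n
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in sensing matrix
y = A @ x_true                       # compressive measurements

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = x - step * (A.T @ (A @ x - y))            # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage

recovered = x.reshape(scene.shape)   # fold the vector back into 3-D
```

The drawback the abstract alludes to is visible here: vectorizing destroys the 3-D structure and makes the sensing matrix very large, which is what motivates working with the 3-D data directly.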

The challenges of real-world applications of laser detection and ranging (Lidar) three-dimensional (3-D) imaging require specialized algorithms. In this paper, a new reconstruction algorithm for single-photon 3-D Lidar images is presented that can handle multiple tasks.

In this paper, we present a full-view optical flow estimation method for plenoptic imaging. Our method exploits the structure delivered by the four-dimensional light field over multiple views, making use of superpixels. These superpixels are four-dimensional in nature and represent the objects in the scene as a set of slanted planes in three-dimensional space, so as to recover a piecewise-rigid depth estimate.

Binary tomography is concerned with the recovery of binary images from a few of their projections (i.e., sums of the pixel values along various directions). To reconstruct an image from noisy projection data, one can pose it as a constrained least-squares problem.
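A small worked instance of this formulation: take row and column sums as the projections, relax the binary constraint to the box [0, 1], and solve the constrained least-squares problem by projected gradient descent. The tiny test image below is chosen so its two projections determine it uniquely (which is not true in general):

```python
import numpy as np

# A tiny binary image whose row/column projections determine it uniquely.
img = np.array([[1.0, 1.0, 1.0],
                [1.0, 1.0, 0.0],
                [1.0, 0.0, 0.0]])
n = img.shape[0]

# Projection matrix A: the first n rows of A sum along image rows,
# the next n rows sum along image columns.
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0   # row sums
    A[n + i, i::n] = 1.0            # column sums

b = A @ img.ravel()                 # noiseless projection data

# Projected gradient on 0.5*||Ax - b||^2 with the box relaxation
# 0 <= x <= 1 standing in for the binary constraint.
x = np.full(n * n, 0.5)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = np.clip(x - step * (A.T @ (A @ x - b)), 0.0, 1.0)

recon = (x.reshape(n, n) > 0.5).astype(int)   # round back to binary
```

With noisy data the same least-squares objective is minimized only approximately, and a regularizer is typically added to disambiguate between the many binary images sharing similar projections.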

The image blurring that results from moving a camera with the shutter open is normally regarded as undesirable. However, this blurring encapsulates information that can be extracted to recover the light rays present within the scene. Given the correct recovery of the light rays that resulted in a blurred image, it is possible to reconstruct images...

The intrinsically limited spatial resolution of positron emission tomography (PET) confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET using anatomical guidance provided by high-resolution magnetic resonance (MR) images. The framework relies on image-domain postprocessing of already-reconstructed PET images by means of spatially variant deconvolution stabilized by an MR-based joint entropy penalty function.
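To make the image-domain postprocessing idea concrete, here is a plain Richardson-Lucy deconvolution of an already-"reconstructed" image with a spatially invariant Gaussian PSF. This is only a generic sketch of the deconvolution step: the paper's method is spatially variant and stabilized by an MR-based joint entropy penalty, neither of which is modeled here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma, n_iter=50):
    """Richardson-Lucy deconvolution assuming a Gaussian PSF."""
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = gaussian_filter(est, sigma) + 1e-12   # avoid divide-by-zero
        # The Gaussian PSF is symmetric, so the adjoint is the same blur.
        est = est * gaussian_filter(blurred / conv, sigma)
    return est

# Toy PET-like phantom: a single hot spot blurred by the system PSF.
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
blurred = gaussian_filter(truth, sigma=2.0)
deblurred = richardson_lucy(blurred, sigma=2.0)
```

The multiplicative update keeps the estimate nonnegative, which matters for activity images; the anatomical penalty in the paper additionally keeps the sharpening consistent with MR edges.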

Dual-energy computed tomography (DECT) differentiates materials by exploiting the varying material linear attenuation coefficients (LACs) under different x-ray energy spectra. Multi-material decomposition (MMD) is a particularly attractive DECT clinical application for distinguishing the complicated material components within the human body.
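In the simplest two-material case, each voxel's measured LAC at the two spectra is a linear mix of the basis materials' LACs, so the volume fractions follow from a 2x2 linear solve. The coefficient values below are illustrative placeholders, not calibrated LACs:

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) at the low- and
# high-energy spectra for two basis materials -- hypothetical numbers.
mu = np.array([[0.28, 0.20],    # "water":  [low-kVp, high-kVp]
               [0.60, 0.35]])   # "bone":   [low-kVp, high-kVp]

def decompose(meas_low, meas_high):
    """Solve f_water*mu_water + f_bone*mu_bone = measured LAC per energy."""
    A = mu.T                    # rows: energy spectra, cols: materials
    return np.linalg.solve(A, np.array([meas_low, meas_high]))

# A voxel that is 70% water and 30% bone by volume:
f_true = np.array([0.7, 0.3])
meas = mu.T @ f_true            # simulated dual-energy measurements
f_est = decompose(meas[0], meas[1])
```

With more than two materials the per-voxel system becomes underdetermined, which is why MMD methods add constraints such as volume conservation and the assumption that each voxel contains only a few materials.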

Resolution enhancements are often desired in imaging applications where high-resolution sensor arrays are difficult to obtain. Many computational imaging methods have been proposed to encode high-resolution scene information on low-resolution sensors by cleverly modulating light from the scene before it hits the sensor. 
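A toy 1-D version of this modulate-then-integrate idea: each shot applies a known random mask to the scene before a low-resolution sensor sums blocks of samples, and enough independent masks make the system invertible by least squares. Everything here (masks, sizes, the recovery method) is an illustrative assumption, not a specific method from this issue:

```python
import numpy as np

rng = np.random.default_rng(2)

hr = rng.random(16)          # 1-D high-resolution scene (16 samples)
factor = 4                   # the sensor integrates blocks of 4 samples
n_shots = 6                  # capture more coded shots than the factor

# Each shot applies a known random modulation mask before the low-res
# sensor integrates each block of `factor` scene samples into one pixel.
rows = []
for _ in range(n_shots):
    mask = rng.random(hr.size)
    for p in range(hr.size // factor):
        r = np.zeros(hr.size)
        r[p * factor:(p + 1) * factor] = mask[p * factor:(p + 1) * factor]
        rows.append(r)
A = np.asarray(rows)         # (n_shots * 4) x 16 measurement matrix
y = A @ hr                   # all coded low-resolution measurements

# With >= factor independent masks per block the system is overdetermined,
# and the high-resolution scene follows from ordinary least squares.
hr_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Real systems trade the number of shots against priors on the scene (sparsity, smoothness) so that fewer coded measurements suffice.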
