IEEE Transactions on Computational Imaging

Tomography has been widely used in many fields. Its theoretical basis is the Radon transform, the line integral of the object along a radial line oriented at a specific angle. In practice, however, the detector that collects each projection has a finite width, so the measurement does not coincide with an ideal line integral, and the resolution of the reconstructed image is reduced. To overcome the effect of the detector width, some reconstruction methods take it into account explicitly and achieve high reconstruction quality, such as the distance-driven model (DDM) and the area integral model (AIM).
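
As a rough illustration of the gap between the ideal line-integral model and a finite-width detector (an illustration only, not the DDM or AIM forward models), the Python sketch below computes an ideal sinogram of the Shepp-Logan phantom with scikit-image, emulates a wide detector by averaging neighbouring detector bins, and compares the filtered back-projection reconstructions. The bin-averaging width is an arbitrary assumption.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Ideal line-integral (Radon) model: one ray per detector position.
image = rescale(shepp_logan_phantom(), 0.5)            # ~200x200 phantom
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles
sinogram = radon(image, theta=theta)                    # ideal line integrals

# Crude finite-width detector: each measured bin averages several
# neighbouring ideal rays (width = 3 bins here, purely illustrative).
width = 3
kernel = np.ones(width) / width
wide_sinogram = np.apply_along_axis(
    lambda p: np.convolve(p, kernel, mode="same"), 0, sinogram)

# Reconstruct both with filtered back-projection and compare resolution.
recon_ideal = iradon(sinogram, theta=theta)
recon_wide = iradon(wide_sinogram, theta=theta)
print("RMSE ideal detector:", np.sqrt(np.mean((recon_ideal - image) ** 2)))
print("RMSE wide detector :", np.sqrt(np.mean((recon_wide - image) ** 2)))
```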

Recent efforts on solving inverse problems in imaging via deep neural networks use architectures inspired by a fixed number of iterations of an optimization method. The number of iterations is typically quite small due to difficulties in training networks corresponding to more iterations; the resulting solvers cannot be run for more iterations at test time without incurring significant errors.
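
To make the fixed unrolling depth concrete, here is a minimal NumPy sketch of an unrolled ISTA-style solver for a sparse linear inverse problem; the per-iteration step sizes and thresholds stand in for learned parameters, and the problem sizes are placeholders rather than anything from the papers in question.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, step_sizes, thresholds):
    """Run exactly K = len(step_sizes) proximal-gradient iterations.

    In an unrolled network these per-iteration parameters are trained
    jointly, which ties the solver to the fixed depth K: running more
    iterations at test time has no trained parameters to use.
    """
    x = np.zeros(A.shape[1])
    for alpha, tau in zip(step_sizes, thresholds):
        x = soft_threshold(x - alpha * A.T @ (A @ x - y), tau)
    return x

# Toy problem: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)

K = 10  # the fixed unrolling depth
x_hat = unrolled_ista(y, A, step_sizes=[0.9] * K, thresholds=[0.05] * K)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```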

Given a spectral library, sparse unmixing aims to estimate the fractional proportions (abundances) of the library spectra in each pixel of a hyperspectral scene. However, the ever-growing dimensionality of spectral dictionaries strongly limits the performance of sparse unmixing algorithms. In this study, we propose a novel dictionary pruning (DP) approach that makes sparse unmixing algorithms both more accurate and more time-efficient.
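
A hedged sketch of the general idea (not the authors' specific DP criterion): score each library atom against a low-dimensional signal subspace estimated from the data and discard low-scoring atoms before running any sparse solver. The subspace rank, the number of atoms kept, and the toy data below are illustrative assumptions.

```python
import numpy as np

def prune_dictionary(Y, D, k_subspace=10, keep=40):
    """Keep the `keep` library spectra best explained by the data subspace.

    Y : (bands, pixels) hyperspectral data matrix
    D : (bands, atoms)  spectral library
    """
    # Signal subspace from the top-k left singular vectors of the data.
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    U_k = U[:, :k_subspace]
    # Score each atom by how much of its energy lies in that subspace.
    proj = U_k @ (U_k.T @ D)
    score = np.linalg.norm(proj, axis=0) / np.linalg.norm(D, axis=0)
    idx = np.argsort(score)[::-1][:keep]
    return D[:, idx], idx

# Toy data: 5 "true" endmembers drawn from a 200-atom library.
rng = np.random.default_rng(1)
D = np.abs(rng.standard_normal((100, 200)))       # library (bands x atoms)
true_idx = rng.choice(200, 5, replace=False)
abund = np.abs(rng.standard_normal((5, 500)))     # abundances (atoms x pixels)
Y = D[:, true_idx] @ abund + 0.01 * rng.standard_normal((100, 500))

D_pruned, kept = prune_dictionary(Y, D)
print("true atoms retained:", np.intersect1d(true_idx, kept).size, "of 5")
```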

In cell and molecular biology, the fusion of green fluorescent protein (GFP) and phase contrast (PC) images aims to generate a composite image that simultaneously displays the functional information of the GFP image, related to the molecular distribution of living cells, and the structural information of the PC image, such as the nucleus and mitochondria. In this paper, we propose a detail preserving cross network (DPCN), which consists of a structural-guided functional feature extraction branch (SFFEB), a functional-guided structural feature extraction branch (FSFEB) and a detail preserving module (DPM), to address the GFP and PC image fusion task.
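
The exact SFFEB, FSFEB and DPM designs are defined in the paper and are not reproduced here; purely as a structural illustration, the PyTorch sketch below shows a generic two-branch fusion layout with a separate detail path. All layer sizes, channel counts and the single-channel inputs are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Generic feature-extraction branch (placeholder, not SFFEB/FSFEB)."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class TwoBranchFusion(nn.Module):
    """Illustrative fusion net: two cross-guided branches + a detail path."""
    def __init__(self):
        super().__init__()
        self.func_branch = ConvBranch(in_ch=2)    # GFP features guided by PC
        self.struct_branch = ConvBranch(in_ch=2)  # PC features guided by GFP
        self.detail = nn.Conv2d(2, 32, 3, padding=1)  # stand-in detail module
        self.fuse = nn.Conv2d(96, 1, 3, padding=1)

    def forward(self, gfp, pc):
        both = torch.cat([gfp, pc], dim=1)
        feats = torch.cat([self.func_branch(both),
                           self.struct_branch(both),
                           self.detail(both)], dim=1)
        return self.fuse(feats)

net = TwoBranchFusion()
fused = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```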

We present an all-in-one camera model that encompasses the architectures of most existing compressive-sensing light-field cameras, equipped with a single lens and multiple amplitude-coded masks that can be placed at different positions between the lens and the sensor. The proposed model, named the equivalent multi-mask camera (EMMC) model, enables the comparison between different camera designs, e.g., using monochrome or CFA-based sensors, single or multiple acquisitions, or varying pixel sizes, via a simple adaptation of the sampling operator.
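
As a toy illustration of what "adapting the sampling operator" means (a 2-D simplification with placeholder dimensions, not the EMMC operator itself), the NumPy sketch below builds the linear measurement model y = A x for a single amplitude-coded mask and shows how changing the sensor pixel size only changes the matrix A.

```python
import numpy as np

def build_sampling_operator(mask, bin_factor=1):
    """Build a linear operator A for one amplitude-coded mask.

    Each scene pixel is attenuated by the mask, then optionally binned
    (summed) into larger sensor pixels: y = A @ x.
    """
    n = mask.size
    A = np.diag(mask.ravel())                  # per-pixel mask attenuation
    if bin_factor > 1:                         # coarser sensor pixels
        h, w = mask.shape
        B = np.zeros((n // (bin_factor ** 2), n))
        for i in range(h // bin_factor):
            for j in range(w // bin_factor):
                rows = np.arange(i * bin_factor, (i + 1) * bin_factor)
                cols = np.arange(j * bin_factor, (j + 1) * bin_factor)
                idx = (rows[:, None] * w + cols[None, :]).ravel()
                B[i * (w // bin_factor) + j, idx] = 1.0
        A = B @ A
    return A

rng = np.random.default_rng(2)
scene = rng.random((8, 8))
mask = (rng.random((8, 8)) > 0.5).astype(float)   # binary amplitude mask

A_fine = build_sampling_operator(mask, bin_factor=1)    # 64 measurements
A_binned = build_sampling_operator(mask, bin_factor=2)  # 16 measurements
print(A_fine.shape, A_binned.shape)
print((A_binned @ scene.ravel()).shape)
```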

Recently, deep learning-based compressive imaging (DCI) has surpassed conventional compressive imaging in both reconstruction quality and running speed. Although multi-scale sampling has shown superior performance over single-scale sampling, research in DCI has so far been limited to single-scale sampling. Despite being trained with single-scale sampling, DCI tends to favor low-frequency components, much as conventional multi-scale sampling does, especially at low subrates.
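
As a small numerical illustration of why low subrates push a sampler toward low frequencies (an illustration only, not the papers' experiments), the sketch below measures how much of a test image's spectral energy is concentrated in the lowest-frequency band for a few nominal subrates; the test image and subrate values are arbitrary.

```python
import numpy as np
from skimage.data import camera

img = camera().astype(float)
F = np.fft.fftshift(np.fft.fft2(img))
energy = np.abs(F) ** 2

h, w = img.shape
cy, cx = h // 2, w // 2
for subrate in (0.01, 0.05, 0.10):
    # Central frequency block holding roughly `subrate` of the coefficients.
    side = np.sqrt(subrate)
    ry, rx = max(1, int(h * side / 2)), max(1, int(w * side / 2))
    low = energy[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    kept = (2 * ry) * (2 * rx) / (h * w)
    print(f"keeping ~{kept:.1%} lowest frequencies captures "
          f"{100 * low / energy.sum():.1f}% of the energy")
```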

In this article, we propose a method to reconstruct the total electromagnetic field in an arbitrary two-dimensional scattering environment without any prior knowledge of the incident field or the permittivities of the scatterers. However, we assume that the region between the scatterers is homogeneous and that the approximate geometry describing the environment is known.
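
For reference, the key physical constraint exploited in this setting is that, in the source-free homogeneous region between the scatterers, the total field satisfies the homogeneous Helmholtz equation with the background wavenumber, independently of the unknown incident field. The notation below is generic rather than the article's.

```latex
% Total field u(x) in the homogeneous background region \Omega_0
% with wavenumber k_0:
\nabla^2 u(\mathbf{x}) + k_0^2\, u(\mathbf{x}) = 0,
\qquad \mathbf{x} \in \Omega_0 .
```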

Most digital cameras use specialized autofocus sensors, such as phase detection, lidar, or ultrasound, to directly measure the focus state. However, such sensors increase cost and complexity without directly optimizing final image quality. This paper proposes a new pipeline for image-based autofocus and shows that neural image analysis finds focus 5-10x faster than traditional contrast maximization.
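
For context, the traditional image-based baseline works roughly like the sketch below (a generic contrast-maximization loop, not the paper's pipeline): sweep the lens, score each frame with a sharpness metric, and move to the peak. The focus metric and toy focal stack are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def sharpness(frame):
    """Variance-of-Laplacian focus measure (one common contrast metric)."""
    return laplace(frame.astype(float)).var()

def contrast_autofocus(focal_stack):
    """Pick the lens position whose frame maximizes the focus measure.

    focal_stack : list of 2-D arrays, one frame per candidate lens position.
    A real contrast-detection loop evaluates these sequentially while moving
    the lens, which is what makes it slow compared with a single neural
    prediction from one or two frames.
    """
    scores = [sharpness(f) for f in focal_stack]
    return int(np.argmax(scores)), scores

# Toy focal stack: frame 3 is "in focus" (least blur), the rest are blurred.
rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
stack = [gaussian_filter(sharp, sigma=0.2 + abs(i - 3)) for i in range(7)]
best, scores = contrast_autofocus(stack)
print("best focus index:", best)
```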

Non-line-of-sight (NLOS) imaging and tracking is an emerging technology that allows the shape or position of objects hidden around corners or behind diffusers to be recovered from transient, time-of-flight measurements. However, existing NLOS approaches require the imaging system to scan a large area on a visible surface, over which the indirect light paths of the hidden objects are sampled.
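
For context, a commonly used three-bounce transient forward model (generic notation; not necessarily the exact model of the referenced work) relates the measurement taken when illuminating wall point x_s and observing wall point x_d to the hidden-surface albedo, with the delay set by the total indirect path length.

```latex
\tau(\mathbf{x}_s, \mathbf{x}_d, t) =
\int_{\Omega} \frac{\rho(\mathbf{x})}
     {\lVert \mathbf{x}_s - \mathbf{x} \rVert^{2}\,
      \lVert \mathbf{x} - \mathbf{x}_d \rVert^{2}}\,
\delta\!\bigl(\lVert \mathbf{x}_s - \mathbf{x} \rVert +
              \lVert \mathbf{x} - \mathbf{x}_d \rVert - c\,t\bigr)\,
d\mathbf{x}
```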

Three-dimensional reconstruction of tomograms from optical projection microscopy faces several drawbacks. In this paper we employ iterative reconstruction algorithms to avoid streak artefacts in the reconstruction and explore how to optimize two parameters of these algorithms, the iteration number and the initialization, in order to improve reconstruction performance. As benchmarks for direct evaluation of the reconstruction are absent in optical projection tomography, we assess quality through the performance of segmentation on the 3D reconstruction. In our exploratory experiments we use the zebrafish model system, a typical and frequently used specimen in optical projection tomography.
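
As a concrete but deliberately generic sketch of the two parameters being tuned, the loop below runs scikit-image's SART implementation on a single slice, exposing the iteration count and the initial estimate; the phantom data and parameter values are placeholders, not the paper's zebrafish setup.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart, rescale

# Stand-in for one optical-projection-tomography slice.
image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)

def iterative_reconstruction(sino, theta, n_iter=5, init=None):
    """Run n_iter SART sweeps starting from `init`.

    `n_iter` and `init` are exactly the two parameters explored above:
    more iterations sharpen the result but can amplify noise, and a good
    initialization (e.g. an FBP image) can speed up convergence.
    """
    recon = init
    for _ in range(n_iter):
        recon = iradon_sart(sino, theta=theta, image=recon)
    return recon

recon_zero_init = iterative_reconstruction(sinogram, theta, n_iter=5)
recon_fbp_init = iterative_reconstruction(
    sinogram, theta, n_iter=5, init=iradon(sinogram, theta=theta))

for name, r in [("zero init", recon_zero_init), ("FBP init", recon_fbp_init)]:
    print(name, "RMSE:", np.sqrt(np.mean((r - image) ** 2)))
```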
