TCI Articles

We introduce ICE-TIDE, a method for cryogenic electron tomography (cryo-ET) that simultaneously aligns observations and reconstructs a high-resolution volume. The alignment of tilt series in cryo-ET is a major problem limiting the resolution of reconstructions. ICE-TIDE relies on an efficient coordinate-based implicit neural representation of the volume which enables it to directly parameterize deformations and align the projections.
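As a rough illustration of the coordinate-based implicit representation idea (a minimal sketch, not the ICE-TIDE implementation; all names, sizes, and the random weights are illustrative), a volume can be modeled as a small MLP queried at continuous (x, y, z) coordinates:

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    # Map 3-D coordinates to sin/cos features at increasing frequencies,
    # as is common for coordinate-based implicit neural representations.
    feats = [coords]
    for k in range(num_freqs):
        feats.append(np.sin(2.0**k * np.pi * coords))
        feats.append(np.cos(2.0**k * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class CoordinateMLP:
    # Tiny MLP: encoded (x, y, z) -> scalar density. Weights are random
    # here; in practice they would be fit to the tilt-series observations.
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords):
        h = np.maximum(positional_encoding(coords) @ self.W1 + self.b1, 0.0)
        return h @ self.W2 + self.b2

coords = np.random.default_rng(1).uniform(-1, 1, (5, 3))  # 5 query points
enc_dim = 3 + 2 * 4 * 3  # raw coords + sin/cos at 4 frequencies
model = CoordinateMLP(enc_dim)
print(model(coords).shape)  # (5, 1): one density value per queried point
```

Because the representation is a smooth function of the input coordinates, deformations of the coordinates themselves can be parameterized and optimized jointly with the volume.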

Magnetic Resonance Imaging (MRI) is a widely used imaging technique; however, it suffers from long scan times. Although previous model-based and learning-based MRI reconstruction methods have shown promising performance, most have not fully exploited the edge prior of MR images, leaving considerable room for improvement.

While snapshot hyperspectral cameras are cheaper and faster than imagers based on pushbroom or whiskbroom spatial scanning, a snapshot camera typically maps different spectral bands to different spatial locations in a mosaic pattern. A demosaicing process is therefore required to generate the desired hyperspectral image at full spatial and spectral resolution.
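To make the mosaic structure concrete, here is a minimal nearest-neighbour demosaicing sketch for a hypothetical 2×2 spectral mosaic (real snapshot cameras use larger patterns, e.g. 4×4 or 5×5, and far more sophisticated demosaicing):

```python
import numpy as np

def demosaic_nearest(mosaic, pattern=(2, 2)):
    # Each (i, j) offset in the repeating pattern holds samples of one
    # spectral band; upsample each sparse band plane back to full size.
    ph, pw = pattern
    H, W = mosaic.shape
    bands = []
    for i in range(ph):
        for j in range(pw):
            plane = mosaic[i::ph, j::pw]                    # sparse samples of band (i, j)
            up = np.repeat(np.repeat(plane, ph, 0), pw, 1)  # nearest-neighbour upsampling
            bands.append(up[:H, :W])
    return np.stack(bands, axis=0)  # (ph*pw, H, W) hyperspectral cube

mosaic = np.arange(16, dtype=float).reshape(4, 4)
cube = demosaic_nearest(mosaic)
print(cube.shape)  # (4, 4, 4): four bands at full spatial resolution
```

In practice, nearest-neighbour filling is only a baseline; the point is that each band is observed at a fraction of the pixel locations and the rest must be inferred.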

Motion artifact reduction is an important research topic in MR imaging, as motion artifacts degrade image quality and make diagnosis difficult. Recently, many deep learning approaches have been studied for motion artifact reduction. Unfortunately, most existing models are trained in a supervised manner, requiring paired motion-corrupted and motion-free images, or rely on a strict motion-corruption model, which limits their applicability to real-world situations.

Accurately imaging bones using ultrasound has been a long-standing challenge, primarily due to the high attenuation, significant acoustic impedance contrast at cortical boundaries, and the unknown distribution of sound velocity. Furthermore, two-dimensional (2-D) ultrasound bone imaging has limitations in diagnosing osteoporosis from a morphological perspective, as it lacks stereoscopic spatial information.

Indirect Time-of-Flight (iToF) sensors measure the received signal's phase shift or time delay to calculate depth. In realistic conditions, however, recovering depth is challenging as reflections from secondary scattering areas or translucent objects may interfere with the direct reflection, resulting in inaccurate 3D estimates. 
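The basic single-frequency iToF range equation behind this measurement can be sketched as follows (a simplified model with illustrative names; real sensors combine multiple modulation frequencies and correlation samples):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def itof_depth(phase_shift_rad, mod_freq_hz):
    # The measured phase shift of the modulated signal is proportional to
    # the round-trip time of flight:
    #   d = c * phi / (4 * pi * f)
    # Depth is unambiguous only up to c / (2 f), the wrap-around range.
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a pi/2 phase shift at 20 MHz modulation
d = itof_depth(math.pi / 2, 20e6)
print(round(d, 3))  # 1.874 (metres)
```

Multipath interference corrupts exactly this relation: when a secondary reflection adds to the direct one, the measured phase is that of the summed signal, so the recovered depth no longer corresponds to the direct path.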

We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles. We formulate the problem as measurement interpolation at unknown nodes supervised through the forward operator. To solve it we apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to the input coordinates. We also develop differentiable spline interpolators which perform as well as neural networks, require less time to optimize and have well-understood properties.
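A one-dimensional toy version of such a differentiable interpolator (piecewise-linear rather than the higher-order splines discussed above; a sketch with illustrative names) shows why the query coordinate admits a gradient:

```python
import numpy as np

def linear_interp(samples, x):
    # Piecewise-linear interpolation of uniformly spaced samples at a
    # continuous coordinate x; differentiable in x almost everywhere.
    i = int(np.clip(np.floor(x), 0, len(samples) - 2))
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

def dinterp_dx(samples, x):
    # Gradient of the interpolant w.r.t. the query coordinate: the local
    # slope. This is what allows unknown measurement coordinates (sensor
    # locations, projection angles) to be optimized by gradient descent.
    i = int(np.clip(np.floor(x), 0, len(samples) - 2))
    return samples[i + 1] - samples[i]

samples = np.array([0.0, 1.0, 4.0, 9.0])
print(linear_interp(samples, 1.5))  # 2.5
print(dinterp_dx(samples, 1.5))     # 3.0
```

Neural fields give the same property automatically: the network output is a smooth function of the input coordinate, so autodiff supplies the coordinate gradient.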

Light Fields (LFs) are easily degraded by noise and low light. Low-light LF enhancement and denoising are more challenging than the corresponding single-image tasks because the epipolar information among views must be taken into consideration. In this work, we propose a multiple-stream progressive restoration network that restores the whole LF in a single forward pass. To make full use of the supplementary information across views and preserve the epipolar information, we design three types of input composed of view stacking.

In the snapshot compressive imaging (SCI) field, a central challenge is how to exploit priors for recovering the original high-dimensional data from its lower-dimensional measurements. Recent plug-and-play efforts that plug in deep denoisers have achieved superior performance, and their convergence has been guaranteed under the assumption of bounded denoisers and the condition of diminishing noise levels. However, it is difficult to explicitly prove the boundedness of existing deep denoisers due to their complex network architectures.
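A generic plug-and-play proximal-gradient loop (a schematic sketch under simplified assumptions, not the method or analysis above; the box-filter denoiser merely stands in for a deep denoiser) looks like:

```python
import numpy as np

def pnp_pgd(A, y, denoiser, eta=0.1, iters=50):
    # Plug-and-play proximal gradient: alternate a data-fidelity gradient
    # step with a denoiser used in place of a proximal operator.
    # Convergence analyses typically assume the denoiser is bounded and
    # its effective noise level diminishes over iterations.
    x = A.T @ y
    for _ in range(iters):
        x = x - eta * A.T @ (A @ x - y)   # gradient step on ||Ax - y||^2 / 2
        x = denoiser(x)                   # prior step: plugged-in denoiser
    return x

def box_denoiser(x):
    # Stand-in denoiser: 3-tap moving average (a real SCI pipeline would
    # plug in a learned deep denoiser here).
    return np.convolve(x, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8)) / np.sqrt(20)  # toy compressive measurement matrix
y = A @ np.ones(8)
x_hat = pnp_pgd(A, y, box_denoiser)
print(x_hat.shape)  # (8,)
```

The boundedness question raised above concerns the `denoiser` slot: the guarantees require properties of that operator that are hard to verify for deep networks.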

Images captured in low-light environments suffer from serious degradation due to insufficient light, degrading the performance of industrial and civilian imaging devices. To address the noise, chromatic aberration, and detail distortion left by existing enhancement methods, this paper proposes an integrated learning approach (LightingNet) for low-light image enhancement.
