IEEE TCI Article

Motion artifact reduction is an important research topic in MR imaging, as motion artifacts degrade image quality and make diagnosis difficult. Recently, many deep learning approaches have been studied for motion artifact reduction. Unfortunately, most existing models are trained in a supervised manner, requiring paired motion-corrupted and motion-free images, or are based on a strict motion-corruption model, which limits their use in real-world situations.

Accurately imaging bones using ultrasound has been a long-standing challenge, primarily due to the high attenuation, significant acoustic impedance contrast at cortical boundaries, and the unknown distribution of sound velocity. Furthermore, two-dimensional (2-D) ultrasound bone imaging has limitations in diagnosing osteoporosis from a morphological perspective, as it lacks stereoscopic spatial information.

Indirect Time-of-Flight (iToF) sensors measure the received signal's phase shift or time delay to calculate depth. In realistic conditions, however, recovering depth is challenging as reflections from secondary scattering areas or translucent objects may interfere with the direct reflection, resulting in inaccurate 3D estimates. 
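
For intuition, the phase-to-depth conversion behind iToF sensing can be sketched in a few lines of Python/NumPy. The four-bucket demodulation convention, the variable names, and the single-return assumption below are illustrative and are not taken from the article; multipath or translucent surfaces add extra phasors that bias exactly this estimate.

import numpy as np

C = 3e8  # speed of light (m/s)

def itof_depth(q0, q90, q180, q270, f_mod):
    # Depth from four correlation samples of a continuous-wave iToF pixel.
    # Phase shift of the returned signal relative to the emitted one.
    phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
    # The round trip covers twice the depth, hence the factor 4*pi*f_mod.
    return C * phase / (4 * np.pi * f_mod)

# Example: a 30 MHz modulation and a pi/2 phase shift correspond to ~1.25 m.
print(itof_depth(0.0, 1.0, 0.0, -1.0, 30e6))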

We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles. We formulate the problem as measurement interpolation at unknown nodes supervised through the forward operator. To solve it, we apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to the input coordinates. We also develop differentiable spline interpolators that perform as well as neural networks, require less time to optimize, and have well-understood properties.
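
A minimal sketch of the core ingredient, a field that is differentiable with respect to its input coordinates so that uncertain coordinates can themselves be optimized, might look as follows in PyTorch. The architecture, the one-dimensional angle parameterization, and the plain data-fit loss (standing in for supervision through the forward operator) are assumptions for illustration, not the article's implementation.

import torch
import torch.nn as nn

class MeasurementField(nn.Module):
    # Tiny neural field: maps a projection angle to a measurement sample
    # and is differentiable in that angle.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, theta):
        # Periodic encoding keeps the field smooth and 2*pi-periodic in the angle.
        feats = torch.stack([torch.sin(theta), torch.cos(theta)], dim=-1)
        return self.net(feats).squeeze(-1)

# The nominal projection angles are uncertain, so they are trainable too.
angles = nn.Parameter(torch.linspace(0, torch.pi, 32))
field = MeasurementField()
opt = torch.optim.Adam(list(field.parameters()) + [angles], lr=1e-3)

y_obs = torch.randn(32)                       # placeholder measurements
loss = ((field(angles) - y_obs) ** 2).mean()  # the real loss goes through the forward operator
loss.backward()
opt.step()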

Light fields (LFs) are easily degraded by noise and low light. Low-light LF enhancement and denoising are more challenging than the corresponding single-image tasks because the epipolar information among views should be taken into consideration. In this work, we propose a multiple-stream progressive restoration network to restore the whole LF in just one forward pass. To make full use of the supplementary information from the multiple views and preserve the epipolar information, we design three types of input composed of view stacking.
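
As a rough illustration of what view stacking can look like, a 4D LF given as a grid of sub-aperture views can be reorganized into stacks that expose different slices of the epipolar structure. The three groupings below are assumptions for illustration only; they are not the article's three input types.

import numpy as np

def view_stacks(lf):
    # lf: light field of shape (U, V, H, W), i.e. a U x V grid of views.
    u, v, h, w = lf.shape
    all_views = lf.reshape(u * v, h, w)        # every sub-aperture image in one stack
    row_stacks = [lf[i] for i in range(u)]     # horizontal view stacks, each (V, H, W)
    col_stacks = [lf[:, j] for j in range(v)]  # vertical view stacks, each (U, H, W)
    return all_views, row_stacks, col_stacks

lf = np.random.rand(5, 5, 64, 64)
stacks = view_stacks(lf)
print(stacks[0].shape, stacks[1][0].shape, stacks[2][0].shape)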

In the snapshot compressive imaging (SCI) field, how to exploit priors for recovering the original high-dimensional data from its lower-dimensional measurements is a challenge. Recent plug-and-play methods built on deep denoisers have achieved superior performance, and their convergence has been guaranteed under the assumption of bounded denoisers and the condition of diminishing noise levels. However, it is difficult to explicitly prove the boundedness of existing deep denoisers due to their complex network architectures.
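
A generic plug-and-play loop of the kind analyzed in this line of work can be sketched as follows. The Gaussian filter merely stands in for a learned deep denoiser, and the shrinking smoothing schedule mirrors the diminishing-noise-level condition mentioned above; the forward model, step size, and schedule are illustrative assumptions, not the article's algorithm.

import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_sci(y, masks, n_iter=50, step=1.0):
    # Video SCI forward model: y = sum_t masks[t] * x[t], with masks of shape (T, H, W).
    x = np.stack([y * m for m in masks])                   # crude initialization
    sigmas = np.linspace(2.0, 0.5, n_iter)                 # shrinking smoothing strength
    for k in range(n_iter):
        residual = y - (masks * x).sum(axis=0)             # data-fidelity residual
        x = x + step * masks * residual                    # gradient step on ||y - Ax||^2
        x = np.stack([gaussian_filter(f, sigmas[k]) for f in x])  # plug-in "denoiser"
    return x

masks = (np.random.rand(8, 64, 64) > 0.5).astype(float)
truth = np.random.rand(8, 64, 64)
y = (masks * truth).sum(axis=0)
print(pnp_sci(y, masks).shape)  # (8, 64, 64)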

Images captured in low-light environments suffer from serious degradation due to insufficient light, leading to a decline in the performance of industrial and civilian devices. To address the noise, chromatic aberration, and detail distortion that remain when low-light images are enhanced with existing methods, this paper proposes an integrated learning approach (LightingNet) for low-light image enhancement.

Reconstruction of CT images from a limited set of projections through an object is important in several applications ranging from medical imaging to industrial settings. As the number of available projections decreases, traditional reconstruction techniques such as the FDK algorithm and model-based iterative reconstruction methods perform poorly.
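
The degradation with fewer projections is easy to reproduce in 2D with filtered backprojection (the parallel-beam analogue of FDK). This small scikit-image experiment is only meant to illustrate the trend, not to reproduce the article's setting.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
for n_views in (180, 30, 10):
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(phantom, theta=angles)
    recon = iradon(sinogram, theta=angles, filter_name="ramp")
    rmse = np.sqrt(np.mean((recon - phantom) ** 2))
    print(f"{n_views:3d} projections -> RMSE {rmse:.3f}")  # error grows as views drop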

Robustness and stability of image-reconstruction algorithms have recently come under scrutiny. Their importance to medical imaging cannot be overstated. We review the known results for the topical variational regularization strategies (ℓ2 and ℓ1 regularization) and present novel stability results for ℓp-regularized linear inverse problems for p ∈ (1, ∞). Our results guarantee Lipschitz continuity for small p and Hölder continuity for larger p. They generalize well to the Lp(Ω) function spaces.
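
Schematically, the kind of estimate at stake can be written in LaTeX as below; the exact constants, exponents, and norms are those derived in the article and are not reproduced here.

\begin{align*}
  \hat{x}_p(y) &\in \arg\min_{x} \tfrac{1}{2}\,\lVert Ax - y\rVert_2^2 + \lambda \lVert x\rVert_p^p,
  \qquad p \in (1,\infty),\\
  \lVert \hat{x}_p(y_1) - \hat{x}_p(y_2)\rVert &\le C\,\lVert y_1 - y_2\rVert^{\gamma},
  \qquad \gamma = 1 \text{ (Lipschitz, small } p\text{)}, \quad \gamma < 1 \text{ (H\"older, larger } p\text{)}.
\end{align*}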

Spatial-angular separable convolution (SAS-conv) has been widely used for efficient and effective 4D light field (LF) feature embedding in different tasks; it mimics a 4D convolution by alternately operating on 2D spatial slices and 2D angular slices. In this paper, we argue that, despite its global intensity modeling capabilities, SAS-conv can only embed local geometry information into the features, resulting in inferior performance in regions with textures and occlusions. Because the epipolar lines are highly related to the scene depth, we introduce the concept of spatial-angular correlated convolution (SAC-conv).
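
For reference, the SAS-conv scheme described above (alternating 2D convolutions on spatial and angular slices of the 4D LF) can be sketched in PyTorch as follows. Layer widths, kernel size, and tensor layout are illustrative choices, and the SAC-conv variant proposed in the article is not shown.

import torch
import torch.nn as nn

class SASConv(nn.Module):
    # Spatial-angular separable convolution on a light field of shape
    # (B, C, U, V, H, W): a 2D conv over (H, W) with the angular dims folded
    # into the batch, then a 2D conv over (U, V) with the spatial dims folded
    # into the batch, approximating a full 4D convolution.
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.angular = nn.Conv2d(c_out, c_out, k, padding=k // 2)

    def forward(self, x):
        b, c, u, v, h, w = x.shape
        xs = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        xs = self.spatial(xs)                                 # spatial pass
        c2 = xs.shape[1]
        xa = xs.reshape(b, u, v, c2, h, w).permute(0, 4, 5, 3, 1, 2)
        xa = self.angular(xa.reshape(b * h * w, c2, u, v))    # angular pass
        return xa.reshape(b, h, w, c2, u, v).permute(0, 3, 4, 5, 1, 2)

lf = torch.randn(1, 8, 5, 5, 32, 32)     # a 5x5 grid of 32x32 views, 8 channels
print(SASConv(8, 16)(lf).shape)          # torch.Size([1, 16, 5, 5, 32, 32])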
