IEEE Transactions on Image Processing

Signal decomposition is a classical problem in signal processing, which aims to separate an observed signal into two or more components, each characterized by its own properties. Usually, each component is described by its own subspace or dictionary. Extensive research has addressed the case where the components are additive, but in real-world applications the components are often non-additive.
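
For orientation, the following is a minimal sketch of the additive case mentioned above: a signal is split into a smooth component and a spike component by a joint least-squares fit over two known dictionaries. The dictionaries, sizes, and noise level are illustrative assumptions, not the method of any paper summarized here.

```python
# Minimal sketch of additive signal decomposition with two known dictionaries.
# Dictionary choices and the least-squares formulation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Hypothetical dictionaries: a few low-frequency cosines and a few isolated spikes.
D_smooth = np.stack([np.cos(2 * np.pi * k * np.arange(n) / n) for k in range(4)], axis=1)
D_spike = np.eye(n)[:, ::16]

# Synthesize an observation as an additive mix of the two components plus noise.
x_true = D_smooth @ rng.normal(size=4)
s_true = D_spike @ rng.normal(size=D_spike.shape[1])
y = x_true + s_true + 0.01 * rng.normal(size=n)

# Joint least-squares fit over the concatenated dictionary [D_smooth | D_spike].
D = np.concatenate([D_smooth, D_spike], axis=1)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
x_hat = D_smooth @ coef[:4]        # recovered smooth component
s_hat = D_spike @ coef[4:]         # recovered spike component

print(np.linalg.norm(x_hat - x_true), np.linalg.norm(s_hat - s_true))
```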

Surface normal estimation from photometric stereo becomes less reliable when the surface reflectance deviates from the Lambertian assumption. The non-Lambertian effect can be addressed explicitly by modeling the physics of the reflectance function, at the cost of introducing a highly nonlinear optimization.
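
For reference, the Lambertian baseline that such work departs from reduces to a per-pixel least-squares problem: with image intensities I = L(ρn) for known unit lighting directions L, the scaled normal ρn is recovered with a pseudo-inverse. The synthetic setup below (shadows and specularities ignored) is only a sketch of that classical baseline, not of any non-Lambertian method.

```python
# Classical Lambertian photometric stereo by least squares (synthetic, idealized).
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 32, 32, 6                                  # image size, number of lights

L = rng.normal(size=(K, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)        # unit lighting directions

# Synthetic scene: per-pixel unit normals and albedo (shadows ignored in this sketch).
n_true = rng.normal(size=(H * W, 3))
n_true /= np.linalg.norm(n_true, axis=1, keepdims=True)
albedo = rng.uniform(0.5, 1.0, size=(H * W, 1))

# Lambertian image formation: one intensity column per light, I = (albedo * n) L^T.
I = (albedo * n_true) @ L.T                          # (H*W, K)

# Least-squares recovery of the scaled normal b = albedo * n, pixel by pixel.
b = I @ np.linalg.pinv(L).T                          # (H*W, 3)
albedo_hat = np.linalg.norm(b, axis=1, keepdims=True)
n_hat = b / np.maximum(albedo_hat, 1e-8)

print(float(np.mean(np.sum(n_hat * n_true, axis=1))))  # ~1.0 under this ideal model
```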

Being able to cover a wide range of views, pan-tilt-zoom (PTZ) cameras have been widely deployed in visual surveillance systems. To achieve a global-view perception of a surveillance scene, it is necessary to generate its panoramic background image, which can be used for subsequent applications such as road segmentation and active tracking.
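
As a purely illustrative starting point, overlapping views from a panning camera can be composed into a single panorama with OpenCV's generic stitcher; this is not the background-generation method of the work summarized above, and the file paths below are placeholders.

```python
# Illustrative panorama composition from overlapping PTZ views using OpenCV.
import cv2

paths = ["view_000.jpg", "view_001.jpg", "view_002.jpg"]   # placeholder PTZ frames
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"stitching failed with status {status}")
```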

In this paper, we propose a Group-Sparse Representation-based method with applications to Face Recognition (GSR-FR). The novel sparse representation variational model includes a non-convex sparsity-inducing penalty and a robust non-convex loss function. The penalty encourages group sparsity by using an approximation of the ℓ0 quasi-norm, and the loss function is chosen to make the algorithm robust to noise, occlusions, and disguises.
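
Schematically, models of this kind take the following general form (this is only an orientation; the specific loss ℓ, penalty φ, weights, and grouping used by GSR-FR are defined in the paper and may differ):

$$\min_{\mathbf{x}} \;\; \ell\bigl(D\mathbf{x} - \mathbf{y}\bigr) \;+\; \lambda \sum_{g \in \mathcal{G}} \phi\bigl(\lVert \mathbf{x}_g \rVert_2\bigr),$$

where $D$ collects the training samples as dictionary atoms, each group $g \in \mathcal{G}$ gathers the coefficients associated with one subject, $\phi$ is a non-convex surrogate of the ℓ0 quasi-norm applied to the group norms, and $\ell$ is a robust non-convex data-fidelity term.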

We present an image captioning framework that generates captions under a given topic. The topic candidates are extracted from the caption corpus. A given image’s topics are then selected from these candidates by a CNN-based multi-label classifier. The input to the caption generation model is an image-topic pair, and the output is a caption of the image.
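
The sketch below is a minimal, hypothetical illustration of topic-conditioned decoding: an image feature and a multi-hot topic vector are fused and used to initialize an LSTM caption decoder. The layer sizes, fusion scheme, and class names are assumptions for illustration, not the architecture of the framework summarized above.

```python
# Hypothetical topic-conditioned caption decoder (illustrative sizes and fusion).
import torch
import torch.nn as nn

class TopicConditionedCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, n_topics=80, vocab_size=10000,
                 embed_dim=256, hidden_dim=512):
        super().__init__()
        self.fuse = nn.Linear(feat_dim + n_topics, hidden_dim)  # image-topic fusion
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, topic_vec, captions):
        # Initialize the decoder state from the fused image-topic representation.
        h0 = torch.tanh(self.fuse(torch.cat([image_feat, topic_vec], dim=1)))
        state = (h0.unsqueeze(0), torch.zeros_like(h0).unsqueeze(0))
        x = self.embed(captions)                   # (B, T, embed_dim)
        y, _ = self.lstm(x, state)
        return self.out(y)                         # per-step vocabulary logits

model = TopicConditionedCaptioner()
logits = model(torch.randn(2, 2048),
               torch.randint(0, 2, (2, 80)).float(),
               torch.randint(0, 10000, (2, 12)))
print(logits.shape)                                # torch.Size([2, 12, 10000])
```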

Most variational formulations for structure-texture image decomposition force the structure images to have small norm in some functional spaces and to share a common notion of edges, i.e., large gradients or large intensity differences. However, such a definition makes it difficult to distinguish structure edges from oscillations that have fine spatial scale but high contrast. In this paper, we introduce a new model by learning deep variational priors for structure images without explicit training data. An alternating direction method of multipliers (ADMM) algorithm and its modular structure are adopted to plug the deep variational priors into an iterative smoothing process.
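
To make the "plug a prior into an iterative smoothing process" idea concrete, the following is a schematic plug-and-play ADMM loop in which the proximal step of the prior is replaced by a denoiser. A Gaussian filter stands in for the learned deep variational prior; the quadratic data term, rho, sigma, and iteration count are illustrative assumptions, not the paper's algorithm.

```python
# Schematic plug-and-play ADMM smoothing with a placeholder denoiser as the prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def plug_and_play_admm(y, rho=1.0, sigma=2.0, iters=30):
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        # Data-fidelity step: closed-form prox of 0.5 * ||x - y||^2.
        x = (y + rho * (z - u)) / (1.0 + rho)
        # Prior step: a denoiser replaces the proximal operator of the prior.
        z = gaussian_filter(x + u, sigma=sigma)
        # Dual update.
        u = u + x - z
    return z                                   # smoothed "structure" component

rng = np.random.default_rng(0)
image = rng.random((64, 64))
structure = plug_and_play_admm(image)
texture = image - structure
print(structure.shape, float(np.abs(texture).mean()))
```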

Hashing is a promising approach for compact storage and efficient retrieval of big data. Compared with conventional hashing methods that rely on handcrafted features, emerging deep hashing approaches employ deep neural networks to learn both the feature representation and the hash function, and have proven more powerful and robust in real-world applications.
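
As a generic illustration of the deep hashing pattern described above (a learned projection relaxed with tanh during training and binarized with sign at retrieval time), consider the minimal head below. The layer sizes and the dot-product distance computation are assumptions for illustration and do not correspond to any specific method.

```python
# Generic deep hashing head: tanh-relaxed codes for training, sign codes for retrieval.
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, feat_dim=2048, n_bits=64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, n_bits)

    def forward(self, features):
        return torch.tanh(self.proj(features))     # relaxed codes in (-1, 1)

    @torch.no_grad()
    def binarize(self, features):
        return torch.sign(self.proj(features))     # {-1, +1} codes for retrieval

head = HashHead()
codes = head.binarize(torch.randn(4, 2048))
# Hamming distance between {-1, +1} codes recovered from dot products.
dist = (codes.shape[1] - codes @ codes.t()) / 2
print(codes.shape, dist.shape)
```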

Fractional interpolation is used to provide sub-pixel level references for motion compensation in the inter prediction of video coding, which attempts to remove temporal redundancy in video sequences. Traditional handcrafted fractional interpolation filters face the challenge of modeling discontinuous regions in videos, while existing deep learning-based methods are either designed for a single quantization parameter (QP), generate only half-pixel samples, or require training a separate model for each sub-pixel position.
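
As an example of the handcrafted filters this line of work contrasts with, the sketch below applies the 6-tap half-pixel luma filter [1, -5, 20, 20, -5, 1]/32 from H.264/AVC along one dimension, with simple edge clamping and floating-point arithmetic instead of the standard's integer rounding and clipping.

```python
# Half-pixel interpolation with the classical H.264/AVC 6-tap luma filter (1-D sketch).
import numpy as np

HALF_PEL_TAPS = np.array([1, -5, 20, 20, -5, 1], dtype=np.float64) / 32.0

def half_pel_row(row):
    """Interpolate the half-pixel samples between integer positions of a 1-D signal."""
    padded = np.pad(row.astype(np.float64), (2, 3), mode="edge")
    # The half-pel sample between row[i] and row[i+1] uses 6 surrounding integer samples.
    return np.array([np.dot(HALF_PEL_TAPS, padded[i:i + 6]) for i in range(len(row))])

row = np.array([10, 12, 40, 42, 41, 39, 15, 11], dtype=np.float64)
print(np.round(half_pel_row(row), 2))
```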

Recent studies have shown the effectiveness of using depth information in salient object detection. However, most commonly available images are still RGB images that do not contain depth data.

Deep convolutional neural networks (CNNs) have revolutionized computer vision research and have seen unprecedented adoption for multiple tasks, such as classification, detection, and caption generation. However, they offer little transparency into their inner workings and are often treated as black boxes that deliver excellent performance.
