IEEE TMM Article


In this paper, we investigate the challenging task of removing haze from a single natural image. Analysis of the haze formation model shows that the atmospheric veil affects chrominance far less than luminance, which motivates us to neglect haze in the chrominance channel and concentrate on the luminance channel during dehazing. In addition, our experimental study shows that the YUV color space is the most suitable for image dehazing.
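As a rough illustration of the luminance-only strategy (a hedged sketch, not the paper's algorithm: the BT.601 conversion and the toy restoration step below are generic stand-ins), one can decompose a pixel into YUV, modify only Y, and convert back:

```python
# Illustrative sketch only: haze is assumed to affect luminance (Y), so the
# chrominance channels (U, V) are passed through untouched.

def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV conversion for components in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse BT.601 YUV -> RGB conversion."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

def dehaze_luminance(y, airlight=0.95, transmission=0.7):
    """Toy luminance restoration via the haze model I = J*t + A*(1 - t);
    airlight and transmission are assumed known here, not estimated."""
    j = (y - airlight * (1.0 - transmission)) / transmission
    return min(max(j, 0.0), 1.0)  # clamp to the valid range

# Process one hazy-looking pixel: only the Y component is restored.
y, u, v = rgb_to_yuv(0.8, 0.75, 0.7)
r, g, b = yuv_to_rgb(dehaze_luminance(y), u, v)
```

A real dehazing method would estimate the airlight and transmission from the image itself; the point here is only that the chrominance path stays identity.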

Video summarization is an important technique for efficiently browsing, managing, and retrieving large collections of videos. Its main objective is to minimize the information lost when selecting a subset of frames from the original video, so that the summary faithfully represents the overall story. Recently developed unsupervised video summarization approaches do not require tedious annotation of important frames to train a model and are therefore practically attractive.
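A common surrogate for "minimize information loss" is to pick keyframes whose feature vectors cover the whole video well. The sketch below (names, features, and the coverage score are illustrative, not from any particular paper) does this greedily:

```python
# Toy greedy keyframe selection: each candidate keyframe is scored by how
# much it improves the coverage of all frames, where a frame is "covered"
# by its most similar selected keyframe.

def similarity(a, b):
    """Cosine similarity between two frame feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def coverage(frames, subset):
    """Sum over frames of similarity to the closest selected keyframe."""
    return sum(max(similarity(f, frames[k]) for k in subset) for f in frames)

def greedy_summary(frames, budget):
    """Pick `budget` keyframes one at a time by marginal coverage gain."""
    selected = []
    for _ in range(budget):
        candidates = [i for i in range(len(frames)) if i not in selected]
        best = max(candidates, key=lambda i: coverage(frames, selected + [i]))
        selected.append(best)
    return sorted(selected)

# Toy "video": a few near-duplicate frames from distinct visual segments.
frames = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9], [0.7, 0.7], [0.6, 0.8]]
print(greedy_summary(frames, 2))
```

Real unsupervised methods replace the hand-picked features with learned ones and the greedy rule with a trained selector, but the objective has the same flavor.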

With the help of convolutional neural networks (CNNs), video-based human action recognition has made significant progress. Spatial and channel-wise CNN features provide rich information for powerful image description. However, CNNs struggle to model the long-term temporal dependencies of an entire video and cannot focus well on the informative motion regions of actions.

Scene text plays a significant role in image and video understanding, and scene text detection has made great progress in recent years. Most existing models for text detection in the wild assume that every text instance can be bounded by a rotated rectangle or quadrangle. However, the wild also contains many curved texts that a regular bounding box cannot enclose tightly.

Restoring decoded videos from existing bitstreams with deep neural networks at the decoder end, so as to improve compression efficiency, is a research hotspot. Prior work has verified that exploiting decoder-end redundancy, which the encoder underuses, can increase compression efficiency.

The wavelet transform is a powerful tool for multiresolution time-frequency analysis. It has been widely adopted in many image processing tasks, such as denoising, enhancement, fusion, and especially compression; wavelets underlie the successful JPEG 2000 image coding standard.
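The simplest instance of such a multiresolution decomposition is the one-level Haar transform, sketched below. (Real codecs such as JPEG 2000 use biorthogonal CDF 9/7 or 5/3 filters and 2-D transforms; this is only the minimal 1-D case.)

```python
# One-level 1-D Haar wavelet transform: split a signal into a coarse
# approximation band and a detail band, then reconstruct it exactly.

def haar_forward(signal):
    """Split an even-length signal into approximation and detail bands."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the original signal from the two bands."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

x = [4, 6, 10, 12, 8, 8, 0, 2]
a, d = haar_forward(x)          # a = [5.0, 11.0, 8.0, 1.0], d = [-1.0, -1.0, 0.0, -1.0]
assert haar_inverse(a, d) == x  # lossless round trip
```

Compression exploits the fact that most energy concentrates in the approximation band, so the detail coefficients can be quantized aggressively.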

In this paper, a novel single image super-resolution (SR) method based on progressive-iterative approximation is proposed. To preserve textures and sharp edges, SR reconstruction is cast as a progressive-iterative image fitting procedure realized by iterative interpolation.
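A hedged 1-D illustration of the iterative-fitting idea (a simplified stand-in, not the paper's method): start from a plain interpolation upscale, then repeatedly correct the estimate so that downsampling it reproduces the observed low-resolution samples.

```python
# Toy iterative fitting for 2x 1-D upscaling: interpolate, measure the
# residual against the low-resolution observation, and feed it back.

def upscale_linear(lr):
    """2x upscale by linear interpolation (last sample repeated)."""
    hr = []
    for i in range(len(lr)):
        nxt = lr[i + 1] if i + 1 < len(lr) else lr[i]
        hr += [lr[i], (lr[i] + nxt) / 2]
    return hr

def downsample(hr):
    """2x downsample by averaging adjacent pairs."""
    return [(hr[2 * i] + hr[2 * i + 1]) / 2 for i in range(len(hr) // 2)]

def iterative_sr(lr, iters=50, step=1.0):
    hr = upscale_linear(lr)
    for _ in range(iters):
        residual = [o - s for o, s in zip(lr, downsample(hr))]
        # Spread each low-res residual back onto the two pixels it averages.
        for i, r in enumerate(residual):
            hr[2 * i] += step * r
            hr[2 * i + 1] += step * r
    return hr

lr = [1.0, 3.0, 2.0]
hr = iterative_sr(lr)
assert all(abs(a - b) < 1e-6 for a, b in zip(downsample(hr), lr))
```

The fixed point of this loop is an upscaled signal that is exactly consistent with the observation, which is the property progressive-iterative fitting schemes converge toward.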

In High Efficiency Video Coding (HEVC), multiple-QP (quantization parameter) optimization can adapt to local video content. However, the multiple-QP implementation in the HEVC reference software (HM 16.6) finds the best QP value for each coding block at the cost of substantial computational complexity.
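The complexity comes from the brute-force nature of the search: every candidate QP requires encoding the block again. The sketch below (not HM's actual code; the distortion and rate models are synthetic stand-ins) shows the shape of that search with a Lagrangian rate-distortion cost J = D + λR:

```python
# Illustrative multiple-QP search: test every QP around the base QP and
# keep the one minimizing the rate-distortion cost. In a real encoder each
# rd_cost() call is a full encode of the coding block, hence the expense.

def rd_cost(qp, lam=0.85):
    """Toy models: distortion rises and bitrate falls as QP grows."""
    scale = 2.0 ** (qp / 6.0)       # quantizer step roughly doubles every 6 QP
    distortion = 10.0 * scale       # coarser quantization -> more error
    rate = 50000.0 / scale          # coarser quantization -> fewer bits
    return distortion + lam * rate

def best_qp(base_qp, search_range=3):
    """Exhaustively evaluate QPs in [base-range, base+range]."""
    candidates = range(base_qp - search_range, base_qp + search_range + 1)
    return min(candidates, key=rd_cost)

print(best_qp(36))  # each extra candidate costs one more encode pass
```

Fast multiple-QP methods aim to predict the winning QP, or prune candidates early, instead of paying for all 2·range+1 encode passes.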

Recent efforts in the audio signal processing community have focused on acoustic scene classification. In contrast, few studies have addressed acoustic scene clustering, a newly emerging problem that aims to group audio recordings of the same acoustic scene class into a single cluster without prior information or trained classifiers. In this study, we propose a method for acoustic scene clustering that jointly optimizes feature learning and the clustering iteration.
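A hedged sketch of the clustering-iteration half of such a method: plain k-means, alternating between assigning recordings (here, toy 2-D feature vectors) to the nearest centroid and re-estimating the centroids. The joint method described above would additionally update the features between rounds.

```python
# Minimal k-means in pure Python: assignment step, then update step,
# repeated for a fixed number of iterations.

def kmeans(points, k, iters=20):
    centroids = points[:k]  # naive initialization from the first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update: each centroid moves to the mean of its assigned points.
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated "acoustic scenes" in a toy 2-D feature space.
pts = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
       [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
centroids, clusters = kmeans(pts, 2)
```

With fixed hand-crafted features this is ordinary clustering; the contribution of joint methods is that the feature extractor is retrained against the evolving cluster assignments.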

Conventional video saliency detection methods frequently follow a common bottom-up, short-term scheme to estimate video saliency. As a result, such methods cannot avoid the accumulation of errors when the collected low-level clues are persistently ill-detected. Moreover, video frames that are not temporally adjacent to the current frame may still benefit saliency detection in the current frame.
