IEEE TMM Articles

Raw Image Deblurring

Deep learning-based blind image deblurring plays an essential role in removing image blur, since existing blur kernels are limited in modeling real-world blur. Thus far, researchers have focused on building more powerful models for the deblurring problem and have achieved decent results. In this work, we take a new perspective: we identify the opportunity for image enhancement (e.g., deblurring) to operate directly on RAW images, and we investigate novel neural network structures that benefit from RAW-based learning.
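
As a rough illustration of what "deblurring directly from RAW" can mean, the following PyTorch sketch packs an RGGB Bayer mosaic into four half-resolution planes and restores them with a small residual CNN; the packing step and the tiny network are illustrative assumptions, not the paper's model.

# Minimal sketch: deblurring directly from a RAW Bayer mosaic (PyTorch).
# The packing step and the tiny CNN below are illustrative assumptions,
# not the architecture proposed in the paper.
import torch
import torch.nn as nn

def pack_bayer(raw):
    # raw: (B, 1, H, W) RGGB mosaic -> (B, 4, H/2, W/2) half-resolution planes
    return torch.cat([raw[:, :, 0::2, 0::2],   # R
                      raw[:, :, 0::2, 1::2],   # G1
                      raw[:, :, 1::2, 0::2],   # G2
                      raw[:, :, 1::2, 1::2]],  # B
                     dim=1)

class RawDeblurNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 4, 3, padding=1))

    def forward(self, raw):
        x = pack_bayer(raw)
        return x + self.body(x)   # residual prediction of the sharp RAW planes

net = RawDeblurNet()
blurry = torch.rand(1, 1, 128, 128)      # synthetic RGGB mosaic
sharp_packed = net(blurry)               # (1, 4, 64, 64)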

Correlation Graph Convolutional Network for Pedestrian Attribute Recognition

Pedestrian attribute recognition aims at generating a structured description of a pedestrian and plays an important role in surveillance. However, accurate recognition is difficult to achieve due to diverse illumination, partial body occlusion, and limited resolution. This paper therefore proposes a comprehensive relationship framework that describes and exploits the relations among attributes, represents different types of relations in a common dimension, and implements complex transfers of relations in a GCN manner.
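
To make the "GCN manner" concrete, here is a minimal sketch of one graph-convolution step that propagates attribute features over a relation graph; the correlation matrix, feature sizes, and single-layer setup are illustrative assumptions, not the proposed framework.

# Minimal sketch: one GCN propagation step over an attribute relation graph.
# The correlation matrix A and the feature setup are illustrative assumptions.
import torch
import torch.nn.functional as F

def gcn_layer(H, A, W):
    # H: (num_attrs, d) attribute features; A: (num_attrs, num_attrs) relations
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt         # symmetric normalization
    return F.relu(A_norm @ H @ W)                    # propagate and transform

num_attrs, d_in, d_out = 26, 512, 256                # e.g., 26 pedestrian attributes
H = torch.randn(num_attrs, d_in)
A = torch.rand(num_attrs, num_attrs)                 # attribute correlation graph
W = torch.randn(d_in, d_out)
H_next = gcn_layer(H, A, W)                          # (26, 256)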

LensCast: Robust Wireless Video Transmission Over MmWave MIMO With Lens Antenna Array

In this paper, we present LensCast, a novel cross-layer video transmission framework for wireless networks that seamlessly integrates millimeter-wave (mmWave) lens multiple-input multiple-output (MIMO) with robust video transmission. LensCast is designed to exploit the video content diversity at the application layer, together with the spatial path diversity of the lens antenna array at the physical layer, so that video quality degrades gracefully under varying channel conditions.
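
As a toy illustration of the cross-layer idea, the sketch below greedily maps the most important video chunks onto the strongest spatial beams; the chunk importance scores, per-beam SNRs, and greedy rule are assumptions for illustration, not LensCast's actual allocation algorithm.

# Minimal sketch: matching video content diversity (application layer) to
# spatial path diversity (physical layer). Importance scores and per-beam SNRs
# are assumptions; this greedy rule is not LensCast's allocation algorithm.

def allocate(chunks, beam_snrs_db):
    """Map the most important video chunks onto the strongest spatial beams."""
    ranked_chunks = sorted(chunks, key=lambda c: c["importance"], reverse=True)
    ranked_beams = sorted(range(len(beam_snrs_db)),
                          key=lambda b: beam_snrs_db[b], reverse=True)
    return {c["id"]: ranked_beams[i % len(ranked_beams)]
            for i, c in enumerate(ranked_chunks)}

chunks = [{"id": "I-frame", "importance": 1.0},
          {"id": "P-frame", "importance": 0.6},
          {"id": "B-frame", "importance": 0.3}]
print(allocate(chunks, beam_snrs_db=[18.0, 9.5]))  # I-frame -> strongest beam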

Quality Assessment for Omnidirectional Video: A Spatio-Temporal Distortion Modeling Approach

Omnidirectional video, also known as 360-degree video, has become increasingly popular due to its ability to provide immersive and interactive visual experiences. However, the ultra-high resolution and the spherical observation space implied by the large viewing range make omnidirectional video distinctly different from traditional 2D video. To date, video quality assessment (VQA) for omnidirectional video remains an open issue.
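
One classic way to account for the spherical observation space is to weight equirectangular pixels by their area on the sphere, as in WS-PSNR; the sketch below shows that weighting only to illustrate why flat 2D metrics mis-measure 360-degree content, and it is not the spatio-temporal model proposed in the paper.

# Minimal sketch: latitude-based spherical weighting for equirectangular
# frames, in the spirit of WS-PSNR. Illustrative only; not the paper's model.
import numpy as np

def ws_psnr(ref, dist, peak=255.0):
    # ref, dist: (H, W) equirectangular frames
    H, W = ref.shape
    rows = np.arange(H)
    w = np.cos((rows + 0.5 - H / 2) * np.pi / H)     # pixel area on the sphere
    weights = np.tile(w[:, None], (1, W))
    wmse = np.sum(weights * (ref - dist) ** 2) / np.sum(weights)
    return 10 * np.log10(peak ** 2 / wmse)

ref = np.random.rand(256, 512) * 255
dist = ref + np.random.randn(256, 512)
print(ws_psnr(ref, dist))   # poles contribute less, matching their true area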

Semantic-Driven Interpretable Deep Multi-Modal Hashing for Large-Scale Multimedia Retrieval

Multi-modal hashing focuses on fusing different modalities and exploiting the complementarity of heterogeneous multi-modal data for compact hash learning. However, existing multi-modal hashing methods still suffer from several problems, including: 1) Almost all existing methods generate uninterpretable hash codes; they roughly assume that every hash bit contributes equally to the retrieval results, ignoring the discriminative information embedded in hash learning and the semantic similarity in hash retrieval.
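
To see why equal per-bit contributions are a strong assumption, the sketch below fuses two modalities into binary codes and ranks retrieval with a weighted Hamming distance, so each bit can contribute differently; the additive fusion and the bit weights are illustrative assumptions, not the proposed method.

# Minimal sketch: fusing two modalities into binary hash codes and scoring
# retrieval with per-bit weights, so bits contribute unequally. The fusion
# scheme and the weights are illustrative assumptions, not the paper's method.
import numpy as np

def fuse_and_hash(img_feat, txt_feat, W_img, W_txt):
    fused = img_feat @ W_img + txt_feat @ W_txt      # simple additive fusion
    return np.sign(fused)                            # binary codes in {-1, +1}

def weighted_hamming(query, database, bit_weights):
    # per-bit mismatch, weighted by each bit's learned importance
    return ((query != database) * bit_weights).sum(axis=1)

rng = np.random.default_rng(0)
img, txt = rng.normal(size=(5, 128)), rng.normal(size=(5, 64))
codes = fuse_and_hash(img, txt, rng.normal(size=(128, 32)), rng.normal(size=(64, 32)))
dists = weighted_hamming(codes[0], codes, bit_weights=rng.random(32))
print(dists.argsort())   # ranked retrieval; index 0 (the query itself) first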

On Reliable Multi-View Affinity Learning for Subspace Clustering

In multi-view subspace clustering, the low-rankness of the stacked self-representation tensor is widely used to capture high-order cross-view correlation. However, when the nuclear norm is used as a convex surrogate of the rank function, the learned self-representation tensor exhibits strong connectivity with dense coefficients. When the data are noisy, the resulting affinity matrix may be unreliable for subspace clustering, as its lack of sparsity retains connections between inter-cluster samples.
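
The density issue is easy to reproduce: singular value thresholding (SVT), the proximal operator of the nuclear norm, shrinks singular values but leaves the matrix entries dense. The sketch below applies SVT to a random self-representation matrix and builds the usual symmetric affinity from it; the sizes and the threshold are arbitrary.

# Minimal sketch: SVT, the proximal operator of the nuclear norm, and the
# affinity built from a self-representation matrix. Sizes/threshold arbitrary.
import numpy as np

def svt(C, tau):
    # prox of tau * ||.||_* : shrink singular values toward zero
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
C = svt(rng.normal(size=(50, 50)), tau=8.0)          # low-rank but dense
W = (np.abs(C) + np.abs(C.T)) / 2                    # symmetric affinity matrix
print(np.linalg.matrix_rank(C), np.mean(np.abs(W) > 1e-6))
# low rank, yet nearly all entries nonzero -> inter-cluster connections remain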

LD-MAN: Layout-Driven Multimodal Attention Network for Online News Sentiment Recognition

The prevailing use of both images and text to express opinions on the web creates a need for multimodal sentiment recognition. Commonly used social media data containing short text and few images, such as tweets and product reviews, have been well studied. However, predicting readers' sentiment after they read online news articles remains challenging, since news articles often have more complicated structures, e.g., longer text and more images.
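
A minimal sketch of the multimodal-attention ingredient: score each image of an article against a text embedding and pool a text-conditioned visual context; the dimensions and the scaled dot-product scoring are illustrative assumptions, not the LD-MAN layout model.

# Minimal sketch: text-conditioned attention over the images of a news
# article, weighting each image by its relevance to the text. Assumptions
# throughout; this is not LD-MAN itself.
import torch
import torch.nn.functional as F

def attend(text_vec, image_feats):
    # text_vec: (d,); image_feats: (num_images, d)
    scores = image_feats @ text_vec / text_vec.size(0) ** 0.5   # scaled dot product
    alpha = F.softmax(scores, dim=0)                            # attention weights
    return alpha @ image_feats, alpha                           # fused visual context

text = torch.randn(256)                  # article text embedding
images = torch.randn(6, 256)             # six image embeddings from one article
context, alpha = attend(text, images)
fused = torch.cat([text, context])       # joint representation for sentiment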

Dense Video Captioning Using Graph-Based Sentence Summarization

Recently, dense video captioning has made attractive progress in detecting and captioning all events in a long untrimmed video. Although promising results have been achieved, most existing methods do not sufficiently explore the scene evolution within an event temporal proposal for captioning, and therefore perform less satisfactorily when the scenes and objects change over a relatively long proposal. To address this problem, we propose a graph-based partition-and-summarization (GPaS) framework for dense video captioning, which consists of two stages.
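
A minimal sketch of the partition-and-summarization idea: split a long temporal proposal into contiguous segments, describe each segment, and summarize the segment descriptors into one event-level vector for the caption decoder; the uniform partition and mean pooling are assumptions, not the GPaS graph architecture.

# Minimal sketch of partition-and-summarization for a long event proposal.
# Uniform chunking and mean pooling are assumptions, not the GPaS model.
import torch

def partition(frames, num_segments):
    # frames: (T, d) frame features -> list of contiguous (t_i, d) segments
    return torch.chunk(frames, num_segments, dim=0)

def summarize(frames, num_segments=4):
    segment_descs = [seg.mean(dim=0) for seg in partition(frames, num_segments)]
    segments = torch.stack(segment_descs)        # (num_segments, d)
    return segments.mean(dim=0)                  # event-level summary vector

frames = torch.randn(120, 512)                   # a long temporal proposal
event_vec = summarize(frames)                    # fed to a caption decoder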
