TSP Volume 68 | 2020

Learning Mixtures of Separable Dictionaries for Tensor Data: Analysis and Algorithms

This work addresses the problem of learning sparse representations of tensor data using structured dictionary learning. It proposes learning a mixture of separable dictionaries, generalizing the separable dictionary learning model to better capture the structure of tensor data. Two different approaches for learning a mixture of separable dictionaries are explored, and sufficient conditions for local identifiability of the underlying dictionary are derived in each case.
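To make the model concrete, here is a minimal NumPy sketch of the mixture-of-separable-dictionaries structure for 2-D (matrix-valued) data, where the overall dictionary is a sum of Kronecker products. All sizes and the number of mixture terms are hypothetical illustration values, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: matrix-valued observations Y of shape
# (m1, m2), coded against column dictionaries A_k (m1 x p1) and row
# dictionaries B_k (m2 x p2).
m1, m2, p1, p2 = 4, 5, 6, 7
K = 3  # number of separable terms in the mixture

# A mixture of separable dictionaries: D = sum_k kron(B_k, A_k).
# K = 1 recovers the purely separable model; larger K is more expressive.
A = [rng.standard_normal((m1, p1)) for _ in range(K)]
B = [rng.standard_normal((m2, p2)) for _ in range(K)]
D = sum(np.kron(Bk, Ak) for Ak, Bk in zip(A, B))

# For a single separable term, vec(A X B^T) = kron(B, A) vec(X),
# where vec(.) stacks columns (Fortran order in NumPy).
X = rng.standard_normal((p1, p2))
lhs = np.kron(B[0], A[0]) @ X.flatten(order="F")
rhs = (A[0] @ X @ B[0].T).flatten(order="F")
assert np.allclose(lhs, rhs)

# Synthesize one observation from a sparse code x: vec(Y) = D @ x.
x = np.zeros(p1 * p2)
support = rng.choice(p1 * p2, size=5, replace=False)
x[support] = rng.standard_normal(5)
y = D @ x
Y = y.reshape((m1, m2), order="F")  # back to matrix form

print(D.shape, Y.shape)  # (20, 42) (4, 5)
```

The Kronecker identity checked above is what makes separable dictionaries cheap to apply: one never needs to form the full `(m1*m2, p1*p2)` matrix `D` explicitly.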


On the Sample Complexity of Graphical Model Selection From Non-Stationary Samples

We study conditions that allow accurate graphical model selection from non-stationary data. The observed data are modelled as a vector-valued, zero-mean Gaussian random process whose samples are uncorrelated but have different covariance matrices. This model contains, as special cases, the standard setting of i.i.d. samples as well as samples forming a stationary time series.
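The observation model in the abstract can be sketched in a few lines of NumPy: mutually uncorrelated zero-mean Gaussian samples, each drawn with its own covariance matrix. The dimensions and the way the covariances are generated here are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

p, T = 3, 4  # sample dimension and number of samples (hypothetical values)

# Observation model sketched in the abstract: zero-mean Gaussian samples
# x_1, ..., x_T that are mutually uncorrelated, where each x_t has its
# own covariance matrix C_t.
covs = []
for _ in range(T):
    G = rng.standard_normal((p, p))
    covs.append(G @ G.T + np.eye(p))  # random symmetric positive-definite C_t

samples = np.stack([rng.multivariate_normal(np.zeros(p), C) for C in covs])

# Special case: making all C_t equal recovers the standard i.i.d. setting.
print(samples.shape)  # (4, 3)
```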


Tensor Completion from Regular Sub-Nyquist Samples

Signal sampling and reconstruction is a fundamental engineering task at the heart of signal processing. The celebrated Shannon-Nyquist theorem guarantees perfect signal reconstruction from uniform samples obtained at a rate of at least twice the maximum frequency present in the signal. Unfortunately, many signals of interest are far from band-limited.
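As a quick illustration of the classical setting this abstract contrasts against, here is a minimal NumPy sketch of Shannon reconstruction of a band-limited signal from uniform samples taken above the Nyquist rate. The signal, frequencies, and window length are hypothetical illustration values; the finite window means reconstruction is only near-perfect away from the edges.

```python
import numpy as np

# Toy band-limited signal: two sinusoids, maximum frequency f_max.
f_max = 3.0            # Hz
fs = 2.5 * f_max       # sampling rate, above the Nyquist rate 2 * f_max

def signal(t):
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * f_max * t)

# Uniform samples over a finite window (a truncation of the ideal infinite
# sample train assumed by the theorem).
n = np.arange(-200, 201)
x_n = signal(n / fs)

# Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs * t - n).
def reconstruct(t):
    return float(np.sum(x_n * np.sinc(fs * t - n)))

# At a sample instant the cardinal series is exact.
assert np.isclose(reconstruct(0.0), signal(0.0))

# Between samples, the truncated series is accurate near the window centre.
err = abs(reconstruct(0.123) - signal(0.123))
print(err)
```

Sub-Nyquist sampling schemes, such as the regular sub-Nyquist sampling studied in this paper, aim to get by with fewer samples than this classical recipe requires by exploiting additional structure in the signal.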
