This paper addresses the problem of joint downlink channel estimation and user grouping in massive multiple-input multiple-output (MIMO) systems, motivated by the fact that channel estimation performance can be improved by exploiting the additional common sparsity among nearby users. In the literature, a commonly used group-sparsity model assumes that users in each group share a uniform sparsity pattern. In practice, however, this oversimplified assumption often fails to hold, even for physically close users.
In this paper, we propose spatial filters for a linear regression model, which are based on the minimum-variance pseudo-unbiased reduced-rank estimation (MV-PURE) framework. As a sample application, we consider the problem of reconstruction of brain activity from electroencephalographic (EEG) or magnetoencephalographic (MEG) measurements.
Multiple-input multiple-output (MIMO) radar is known for its superiority over conventional radar due to its antenna and waveform diversity. Although it achieves higher angular resolution, improved parameter identifiability, and better target detection, the hardware cost (due to multiple transmitters and multiple receivers) and high energy consumption (multiple pulses) limit the use of MIMO radar in large-scale networks.
The problem of quickest detection of a change in distribution is considered under the assumption that the pre-change distribution is known, and the post-change distribution is only known to belong to a family of distributions distinguishable from a discretized version of the pre-change distribution.
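As background for the quickest change detection setting, the classical case with both distributions fully known is solved by the CUSUM procedure. The sketch below is a minimal illustration for that simpler case only (a known N(0,1) pre-change and a hypothesized N(1,1) post-change; change point, threshold, and sample sizes are all illustrative assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 200, 100
# Samples: pre-change N(0,1), then post-change N(1,1)
x = np.concatenate([rng.normal(0.0, 1.0, n_pre),
                    rng.normal(1.0, 1.0, n_post)])

mu0, mu1, sigma = 0.0, 1.0, 1.0
# Log-likelihood ratio of each sample, post-change vs. pre-change density
llr = (x - mu0) ** 2 / (2 * sigma**2) - (x - mu1) ** 2 / (2 * sigma**2)

# CUSUM recursion W_n = max(0, W_{n-1} + llr_n); alarm when W_n crosses h
W, h = 0.0, 8.0
alarm = None
for n, l in enumerate(llr, start=1):
    W = max(0.0, W + l)
    if W > h and alarm is None:
        alarm = n
print("alarm raised at sample:", alarm)
```

The expected detection delay after the change scales roughly as h divided by the Kullback-Leibler divergence between the two densities, which is the trade-off the threshold h controls.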
We address the downlink channel estimation problem for massive multiple-input multiple-output (MIMO) systems in this paper, where the inherent burst-sparsity structure is exploited to improve the channel estimation performance. In the literature, the commonly used burst-sparsity model assumes a uniform burst-sparse structure in which all bursts have similar sizes.
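To make the modeling assumption concrete, the sketch below generates two burst-sparse channel vectors: one with equal-sized bursts (the uniform model critiqued above) and one with unequal burst sizes. The vector length and the burst positions and lengths are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64  # channel vector length (illustrative)

def burst_sparse(bursts):
    """Complex channel whose nonzeros lie on the given (start, length) bursts."""
    h = np.zeros(N, dtype=complex)
    for start, length in bursts:
        h[start:start + length] = (rng.normal(size=length)
                                   + 1j * rng.normal(size=length))
    return h

h_uniform = burst_sparse([(5, 4), (30, 4), (50, 4)])   # equal burst sizes
h_varied  = burst_sparse([(5, 2), (30, 9), (50, 4)])   # unequal burst sizes
print(np.count_nonzero(h_uniform), np.count_nonzero(h_varied))
```

A uniform model would treat both vectors as having the same per-burst support size; the second vector violates that assumption while remaining burst-sparse.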
Model order selection (MOS) in linear regression models is a widely studied problem in signal processing. Penalized log likelihood techniques based on information theoretic criteria (ITC) are algorithms of choice in MOS problems. Recently, a number of model selection problems have been successfully solved with explicit finite sample guarantees using a concept called residual ratio thresholding (RRT).
Motivated by the many applications of sparse multivariate model estimation, the estimation of sparse directional connectivity between the imperfectly measured nodes of a network is studied. Node dynamics and interactions are assumed to follow a multivariate autoregressive model driven by noise, and the observations are a noisy linear combination of the underlying node activities.
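The assumed data model above can be sketched as a sparse first-order multivariate autoregressive (MVAR) process observed through a noisy linear map. Everything below (dimensions, connectivity entries, noise levels, and the names A, C, y, x) is an illustrative assumption, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 500  # number of nodes, number of time samples (illustrative)

# Sparse directional connectivity: A[i, j] != 0 means node j drives node i
A = np.zeros((d, d))
A[0, 1] = 0.5
A[2, 4] = -0.4
np.fill_diagonal(A, 0.3)      # weak self-connections; spectral radius < 1

C = rng.normal(size=(d, d))   # linear observation (mixing) matrix

# Node dynamics: y_t = A y_{t-1} + w_t, with process noise w_t
y = np.zeros((T, d))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=d)

# Observations: x_t = C y_t + v_t, with measurement noise v_t
x = y @ C.T + rng.normal(scale=0.05, size=(T, d))
print(x.shape)
```

The estimation task is then to recover the sparse pattern of A from x alone, which is harder than from y because the mixing C and the measurement noise obscure the direct node activities.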
We consider the problem of detecting the presence of a complex-valued, possibly improper, but unknown signal, common among two or more sensors (channels) in the presence of spatially independent, unknown, possibly improper and colored, noise. Past work on this problem is limited to signals observed in proper noise.
In the last decade, a large number of techniques have been proposed to ensure integrity and authenticity of data in security-oriented applications, e.g., multimedia forensics, biometrics, watermarking and information hiding, network intrusion detection, and reputation systems. The development of these methods has received a new boost in the last few years with the advent of Deep Learning (DL) techniques and Convolutional Neural Networks (CNNs).