Inference tasks in signal processing are often characterized by the availability of reliable statistical models with some missing instance-specific parameters. One conventional approach uses data to estimate these missing parameters and then performs inference based on the estimated model. Alternatively, data can be leveraged to directly learn the inference mapping end to end. These approaches for combining partially known statistical models and data in inference are related to the notions of generative and discriminative models used in the machine learning literature [1], [2], typically considered in the context of classifiers.
The goal of this “Lecture Notes” column is to introduce the concepts of generative and discriminative learning for inference with a partially known statistical model. While machine learning systems often lack the interpretability of traditional signal processing methods, we focus on a simple setting in which the approaches can be interpreted and compared in a tractable manner that is accessible and relevant to signal processing readers. In particular, we exemplify the approaches for the task of Bayesian signal estimation in a jointly Gaussian setting with the mean-square error (MSE) objective, i.e., a linear estimation setting. Here, the discriminative end-to-end approach directly learns the linear minimum MSE (LMMSE) estimator, while the generative strategy yields a two-stage estimator that first uses data to fit the linear model and then formulates the LMMSE estimator for the fitted model. The ability to derive these estimators in closed form facilitates their analytical comparison. It is rigorously shown that discriminative learning results in an estimate that is more robust to mismatches in the mathematical description of the setup, whereas generative learning, which incorporates prior knowledge of the signal distribution, can exploit this prior to achieve improved MSE in some settings. These analytical findings are demonstrated in a numerical study, available online as a Python Notebook so that it can be presented alongside the lecture detailed in this note.
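To make the two strategies concrete, the following is a minimal numerical sketch in the spirit of the accompanying notebook (though not taken from it). It assumes a linear-Gaussian model y = Hx + w with x drawn from a standard Gaussian prior and a known noise variance; all dimensions and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth linear-Gaussian model (illustrative assumption): y = Hx + w
n, m, N = 4, 6, 1000                      # signal dim, observation dim, samples
H_true = rng.standard_normal((m, n))      # unknown measurement matrix
sigma2 = 0.1                              # noise variance, assumed known here

X = rng.standard_normal((n, N))           # x ~ N(0, I): the assumed prior
Y = H_true @ X + np.sqrt(sigma2) * rng.standard_normal((m, N))

# Generative learning: first fit the model, then derive its LMMSE estimator.
# Step 1: least-squares estimate of H from the paired data (model fitting).
H_hat = Y @ X.T @ np.linalg.inv(X @ X.T)
# Step 2: LMMSE estimator for the fitted model; with C_x = I this reads
# x_hat = H^T (H H^T + sigma2 I)^{-1} y.
A_gen = H_hat.T @ np.linalg.inv(H_hat @ H_hat.T + sigma2 * np.eye(m))

# Discriminative learning: fit the linear estimator end to end, i.e., the
# empirical LMMSE mapping A = Sigma_xy Sigma_yy^{-1} computed from samples.
A_disc = X @ Y.T @ np.linalg.inv(Y @ Y.T)

# Compare the test MSE of the two learned linear estimators.
Xt = rng.standard_normal((n, N))
Yt = H_true @ Xt + np.sqrt(sigma2) * rng.standard_normal((m, N))
for name, A in [("generative", A_gen), ("discriminative", A_disc)]:
    print(f"{name:>14s} MSE: {np.mean((A @ Yt - Xt) ** 2):.4f}")
```

In this well-specified setting both estimators approach the true LMMSE solution as the number of samples grows; the analytical comparison in this column concerns how they diverge under model mismatch and limited data.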
Signal processing algorithms traditionally rely on mathematical models describing the problem at hand. These models encode domain knowledge obtained from, e.g., established statistical characterizations and an understanding of the underlying physics. In practice, statistical models often include parameters that are unknown in advance, such as noise levels and channel coefficients, which are estimated from data.
Recent years have witnessed the dramatic success of machine learning, and particularly of deep learning, in domains such as computer vision and natural language processing [3]. For inference tasks, these data-driven methods typically learn the inference rule directly from data rather than estimating missing parameters in an underlying model, and they can operate without any mathematical modeling. Nonetheless, when some level of domain knowledge is available, it can be harnessed to design inference rules that offer benefits over black-box approaches in terms of performance, interpretability, robustness, complexity, and flexibility [4]. This is achieved by formulating the suitable inference rule given full domain knowledge and then using data to directly optimize the resulting solver, with methodologies including learned optimization [5], deep unfolding [6], and the augmentation of classic algorithms with trainable modules [7].
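As one illustration of this hybrid methodology, the following sketch unfolds the classic iterative shrinkage-thresholding algorithm (ISTA) for sparse recovery into a fixed number of layers whose step sizes and threshold levels are learned from data. This is a generic deep-unfolding example, not the specific architecture of [6], and all names and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """ISTA unfolded into a fixed number of layers, with a learnable
    step size and soft-threshold level per layer (deep unfolding)."""

    def __init__(self, H: torch.Tensor, num_layers: int = 10):
        super().__init__()
        self.H = H  # known measurement matrix: the domain knowledge
        self.step = nn.Parameter(0.1 * torch.ones(num_layers))
        self.thresh = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.H.shape[1], y.shape[1])
        for k in range(len(self.step)):
            # Gradient step on the data-fidelity term ||y - Hx||^2 ...
            x = x - self.step[k] * (self.H.T @ (self.H @ x - y))
            # ... followed by soft thresholding with a learned level.
            x = torch.sign(x) * torch.relu(torch.abs(x) - self.thresh[k])
        return x

# Hypothetical end-to-end training of the per-layer parameters.
m, n, N = 20, 40, 512
H = torch.randn(m, n)
X = (torch.rand(n, N) < 0.1).float() * torch.randn(n, N)  # sparse signals
Y = H @ X + 0.01 * torch.randn(m, N)

model = UnfoldedISTA(H)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(Y) - X) ** 2)
    loss.backward()
    opt.step()
```

The iterative structure here is dictated entirely by the mathematical model, while only a small number of parameters are fit from data, illustrating how domain knowledge and learning are combined in such hybrid designs.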