IEEE Signal Processing Magazine
CURRENT ISSUE
July 2022
Trusting in the Sciences Requires Explainability
The July issue of IEEE Signal Processing Magazine (SPM) is a special issue focused on “Explainability in Data Science: Interpretability, Reproducibility, and Replicability.” With the increased enthusiasm for machine learning, the topic is especially timely, and I invite every IEEE Signal Processing Society (SPS) member to read these instructive papers.
Explainability in Graph Data Science: Interpretability, replicability, and reproducibility of community detection
In many modern data science problems, data are represented by a graph (network), e.g., social, biological, and communication networks. Over the past decade, numerous signal processing and machine learning (ML) algorithms have been introduced for analyzing graph-structured data. With the growth of interest in graphs and graph-based learning tasks in a variety of applications, there is a need to explore explainability in graph data science.
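As an illustration of the kind of algorithm whose replicability the article examines, here is a minimal label-propagation community-detection sketch in pure Python. The toy graph and the fixed random seed are hypothetical choices for this example; fixing the seed is exactly the sort of step that makes a stochastic community-detection run reproducible.

```python
import random

# Toy undirected graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def label_propagation(adj, iters=20, seed=0):
    """Repeatedly assign each node the most common label among its neighbors.
    A fixed seed makes the (otherwise stochastic) result reproducible."""
    rng = random.Random(seed)
    labels = {n: n for n in adj}          # start: every node is its own community
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)                # random update order each sweep
        for n in nodes:
            counts = {}
            for m in adj[n]:
                counts[labels[m]] = counts.get(labels[m], 0) + 1
            labels[n] = max(counts, key=counts.get)  # majority neighbor label
    return labels

labels = label_propagation(adj)
communities = {}
for n, lab in labels.items():
    communities.setdefault(lab, set()).add(n)
print(sorted(sorted(c) for c in communities.values()))
```

Rerunning with a different seed (or no seed) can change the detected communities, which is precisely the interpretability and replicability concern the special-issue article raises.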
Interpretability, Reproducibility, and Replicability
Most of the work we do in signal processing these days is data driven. The shift from the more traditional and model-driven approaches to those that are data driven has also underlined the importance of explainability of our solutions. Because most traditional signal processing approaches start with a number of modeling assumptions, they are comprehensible by the very nature of their construction.
Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective
In addition to the impressive predictive power of machine learning (ML) models, more recently, explanation methods have emerged that enable an interpretation of complex nonlinear learning models, such as deep neural networks. Gaining a better understanding is especially important in, e.g., safety-critical ML applications or medical diagnostics. Although such explainable artificial intelligence (XAI) techniques have reached significant popularity for classifiers, thus far, little attention has been devoted to XAI for regression models (XAIR).
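To make the idea of explaining a regression model concrete, here is a small sketch of gradient-times-input attribution, a common XAI baseline, applied to a hypothetical fitted linear model. The model, its coefficients, and the input are illustrative assumptions, not taken from the article.

```python
def predict(x):
    # Hypothetical fitted regression model: y = 2*x0 - 3*x1 + 0.5*x2
    return 2.0 * x[0] - 3.0 * x[1] + 0.5 * x[2]

def gradient_x_input(f, x, eps=1e-6):
    """Estimate each partial derivative df/dx_i by central differences and
    weight it by the feature value. For a linear f, the attributions sum
    to f(x) - f(0), so each one reads as that feature's contribution."""
    attributions = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad_i = (f(xp) - f(xm)) / (2 * eps)
        attributions.append(grad_i * x[i])
    return attributions

x = [1.0, 2.0, 4.0]
attr = gradient_x_input(predict, x)
print(attr)  # close to [2.0, -6.0, 2.0]: x1 pulls the prediction down most
```

For this linear model the attributions are exact; for deep nonlinear regressors the article's point is that such naive baselines need careful methodological treatment.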
May 2022
Self-Supervised Representation Learning: Introduction, advances, and challenges
Self-supervised representation learning (SSRL) methods aim to provide powerful, deep feature learning without the requirement of large annotated data sets, thus alleviating the annotation bottleneck, one of the main barriers to the practical deployment of deep learning today. These techniques have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pretraining alternatives across a variety of data modalities, including image, video, sound, text, and graphs.
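One widely used ingredient of contrastive SSRL methods is the InfoNCE loss, which pulls an embedding toward an augmented view of the same sample and pushes it away from other samples. Below is a minimal pure-Python sketch of that loss; the embeddings and temperature are hypothetical toy values, not from the article.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss, SimCLR-style: cross-entropy that treats the positive
    (augmented view of the anchor) as the correct class among negatives."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # stable log-sum-exp
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)

anchor    = [1.0, 0.1]
positive  = [0.9, 0.2]                 # augmented view of the same sample
negatives = [[-1.0, 0.5], [0.0, -1.0]] # embeddings of other samples
loss = info_nce(anchor, positive, negatives)
print(loss)  # small, since the positive is far more similar than the negatives
```

Minimizing this loss over many anchors trains the encoder to produce label-free representations, which is the annotation-bottleneck relief the article describes.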
