SPM July 2022


The July issue of IEEE Signal Processing Magazine (SPM) is a special issue focused on “Explainability in Data Science: Interpretability, Reproducibility, and Replicability.” With increased enthusiasm for machine learning, it is a very timely topic, and I invite every IEEE Signal Processing Society (SPS) member to read these very instructive papers.
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have more recently emerged that enable an interpretation of complex nonlinear learning models, such as deep neural networks. Gaining such an understanding is especially important in, for example, safety-critical ML applications and medical diagnostics. Although these explainable artificial intelligence (XAI) techniques have gained significant popularity for classifiers, little attention has thus far been devoted to XAI for regression models (XAIR).
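As a minimal illustration of the kind of explanation method described above, the sketch below computes a gradient-based input attribution for a toy nonlinear regression model. The model, its weights, and the input are all hypothetical, chosen only to show the mechanics; a real XAIR method would operate on a trained model.

```python
import numpy as np

# Toy nonlinear regression model: f(x) = w2 . tanh(W1 @ x).
# (Hypothetical weights for illustration only.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-layer weights
w2 = rng.normal(size=4)        # output weights

def f(x):
    return w2 @ np.tanh(W1 @ x)

def input_gradient(x):
    """Gradient-based attribution: d f / d x via the chain rule."""
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1.0 - h**2))

x = np.array([0.5, -1.0, 2.0])
attribution = input_gradient(x)

# Sanity check against central finite differences.
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(np.allclose(attribution, fd, atol=1e-5))  # True
```

The sign and magnitude of each entry of `attribution` indicate how sensitively the regression output responds to the corresponding input feature, which is one of the simplest building blocks used by explanation methods for nonlinear models.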
In many modern data science problems, data are represented by a graph (network), e.g., social, biological, and communication networks. Over the past decade, numerous signal processing and ML algorithms have been introduced for analyzing graph-structured data. With the growth of interest in graphs and graph-based learning tasks across a variety of applications, there is a need to explore explainability in graph data science.
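For readers new to graph-based learning, a minimal sketch of how such algorithms consume graph-structured data: a single neighborhood-averaging step on a small made-up graph, which is the basic propagation operation underlying many graph filters and graph neural network layers (self-loops and learned weights are omitted here).

```python
import numpy as np

# A small undirected path graph on 4 nodes (0-1-2-3),
# given by its adjacency matrix, with a scalar signal on the nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 1.0])   # node signal

# One propagation step of neighborhood averaging: x' = D^{-1} A x,
# where D is the diagonal degree matrix.
deg = A.sum(axis=1)
x_smooth = (A @ x) / deg
print(x_smooth)   # [0.  0.5 0.5 0. ]
```

Each node's new value is the mean of its neighbors' values, so the signal diffuses along the graph structure; stacking such steps (with learned weights) is how graph-based learning models aggregate relational information.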
Data-driven solutions are playing an increasingly important role in numerous practical problems across multiple disciplines. The shift from traditional model-driven approaches to data-driven ones naturally emphasizes the importance of the explainability of solutions, as, in this case, the connection to a physical model is often not obvious. Explainability is a broad umbrella term that includes interpretability, but it also implies that solutions need to be complete, in that one should be able to “audit” them, ask appropriate questions, and hence gain further insight into their inner workings.
Most of the work we do in signal processing these days is data driven. The shift from the more traditional, model-driven approaches to those that are data driven has also underlined the importance of the explainability of our solutions. Because most traditional signal processing approaches start from a set of modeling assumptions, they are comprehensible by the very nature of their construction.

