July 2022


Volume 39 | Issue 4

The July issue of IEEE Signal Processing Magazine (SPM) is a special issue focused on “Explainability in Data Science: Interpretability, Reproducibility, and Replicability.” Given the increased enthusiasm for machine learning, the topic is very timely, and I invite every IEEE Signal Processing Society (SPS) member to read these instructive papers.
Beyond the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable the interpretation of complex nonlinear learning models, such as deep neural networks. Gaining a better understanding is especially important in, for example, safety-critical ML applications and medical diagnostics. Although such explainable artificial intelligence (XAI) techniques have become quite popular for classifiers, little attention has thus far been devoted to XAI for regression models (XAIR).
In many modern data science problems, data are represented by a graph (network), e.g., social, biological, and communication networks. Over the past decade, numerous signal processing and ML algorithms have been introduced for analyzing graph-structured data. With growing interest in graphs and graph-based learning tasks across a variety of applications, there is a need to explore explainability in graph data science.
Data-driven solutions are playing an increasingly important role in numerous practical problems across multiple disciplines. The shift from the traditional model-driven approaches to those that are data driven naturally emphasizes the importance of the explainability of solutions, as, in this case, the connection to a physical model is often not obvious. Explainability is a broad umbrella and includes interpretability, but it also implies that the solutions need to be complete, in that one should be able to “audit” them, ask appropriate questions, and hence gain further insight about their inner workings.
Most of the work we do in signal processing these days is data driven. The shift from the more traditional, model-driven approaches to those that are data driven has also underlined the importance of the explainability of our solutions. Because most traditional signal processing approaches start from a set of modeling assumptions, they are comprehensible by the very nature of their construction.

