Most of the work we do in signal processing these days is data driven. The shift from the more traditional, model-driven approaches to data-driven ones has also underlined the importance of the explainability of our solutions. Because most traditional signal processing approaches start from a set of modeling assumptions, they are comprehensible by the very nature of their construction. This is not necessarily the case when we choose to rely more heavily on the data and minimize modeling assumptions.
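As a simplified illustration of this contrast, consider estimating the frequency of a single noisy tone. A model-driven estimator that assumes a sinusoid-plus-noise model returns a quantity that is interpretable by construction, whereas a regressor learned from raw samples must be probed after the fact. The sketch below is purely illustrative and is not drawn from any article in this issue; the toy task, the crude linear regressor standing in for a learned model, and all names in it are our own assumptions.

```python
# Illustrative sketch: a model-driven frequency estimator (interpretable by
# construction) versus a data-driven one (must be probed post hoc).
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100.0, 256                      # sampling rate (Hz), samples per record
t = np.arange(n) / fs

def make_record(freq):
    """One noisy single-tone record at the given frequency (Hz)."""
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(n)

# Model-driven: assume "signal = one sinusoid + noise" and pick the FFT peak.
# The estimate is interpretable by construction: it is the physical frequency.
x = make_record(12.5)
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_hat = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
print(f"model-driven estimate: {f_hat:.2f} Hz")

# Data-driven: learn a linear map from raw samples to frequency (a deliberately
# crude stand-in for a learned model; accuracy is not the point here). The 256
# learned weights carry no direct physical meaning, so "explaining" the model
# requires probing it, e.g., by inspecting which input samples it weights most.
train_f = rng.uniform(5, 45, size=500)
X = np.stack([make_record(f) for f in train_f])
w, *_ = np.linalg.lstsq(X, train_f, rcond=None)
print(f"data-driven estimate:  {x @ w:.2f} Hz")
print("largest-weight input indices (a crude attribution):",
      np.argsort(np.abs(w))[-5:])
```

The point is not the accuracy of the crude linear fit, but that its weights, unlike the FFT peak, acquire meaning only through such post hoc probing.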
Explainability is critical not only because one would like to have confidence in a solution, but also because one would like to obtain further insights into the problem from the learned models. This includes interpretability and completeness, so that one can not only “audit” the models but also ask appropriate questions to probe for insights beyond the initial solution, and address additional concerns such as safety, fairness, and reliability. Interpretability, i.e., the ability to attach a physical meaning to a solution, together with reproducibility and replicability, constitutes the three key aspects of explainability. Following the definitions of the National Academies of Sciences, Engineering, and Medicine, reproducibility refers to obtaining consistent results using the same data and code (i.e., method) as the original study, while replicability refers to obtaining consistent results across studies aimed at answering the same scientific question using new data or other computational methods.
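To make the two definitions concrete, the following toy sketch (ours, not taken from any article in this issue) treats a fixed random seed as a stand-in for “the same data and code” and a fresh, independent draw as a stand-in for “new data”; the experiment and all names in it are hypothetical.

```python
# Illustrative sketch of reproducibility vs. replicability on a toy study:
# estimating the mean of a noisy measurement.
import numpy as np

def collect(seed):
    """Stand-in for the data-acquisition step: a fixed, seeded pipeline."""
    return np.random.default_rng(seed).normal(loc=1.0, scale=0.5, size=1000)

def experiment(data):
    """Stand-in for the study's method: estimate the quantity of interest."""
    return data.mean()

# Reproducibility: the same data (same seed) and the same code yield an
# identical result on re-running the study.
assert experiment(collect(seed=42)) == experiment(collect(seed=42))

# Replicability: new, independently collected data (a new seed) should lead
# to a consistent, though not identical, answer to the same question.
print(f"original study: {experiment(collect(seed=42)):.3f}")
print(f"replication:    {experiment(collect(seed=7)):.3f}  (consistent, not identical)")
```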
In this special issue of IEEE Signal Processing Magazine, we have nine articles that demonstrate the multifaceted nature of explainability and span the related concepts of interpretability, reproducibility, and replicability. The articles exhibit the richness of these concepts while highlighting that they take on slightly different meanings, and call for slightly different considerations, in different contexts. They also emphasize that explainability is a key theme requiring attention across application domains and types of solutions, i.e., well beyond neural networks, where such concerns have mostly been emphasized to date.