Call for Papers: IEEE JSTSP Special Series on AI in Signal & Data Science - Toward Explainable, Reliable, and Sustainable Machine Learning

To address the rapidly growing interest in artificial intelligence (AI) and machine learning (ML) for signal processing and data science, the IEEE Signal Processing Society (SPS) is launching a new special series on AI in Signal & Data Science, to be published within the IEEE Journal of Selected Topics in Signal Processing (JSTSP).

Starting in 2024, JSTSP will include a series of articles on AI in Signal and Data Science. The series will serve as a platform for communicating state-of-the-art AI/ML research in signal and data science, highlighting research challenges that remain open and exploring innovative principles and solutions for resolving them.

Accordingly, we invite the submission of high-quality manuscripts on relevant emerging sub-topics; papers must not have been published previously and must not be under review at any other venue. The initial scope of the series covers cutting-edge AI areas relevant to the broader signal and data science communities, including the following topics.

Explainable Machine Learning (XML) is an area of research focused on making AI models transparent, interpretable, and accountable, including but not limited to the following common topics.

  • Feature importance: Understanding the factors that contribute to a model’s predictions.
  • Model interpretability: Making a model's internal workings understandable to humans.
  • Attribution methods: Assigning contribution scores to features for a prediction. 
  • LIME (Local Interpretable Model-Agnostic Explanations): A method for explaining the predictions of any black-box classifier.
  • SHAP (SHapley Additive exPlanations): A unified framework for interpreting the output of any machine learning model (see the sketch after this list).
  • Counterfactual analysis: Generating hypothetical scenarios to understand how changes in inputs would affect a model's predictions.
  • Model distillation: Transferring the knowledge of a complex model into a smaller, simpler, and more interpretable model.
  • Adversarial examples: Examining how machine learning models can be misled by carefully crafted inputs.
  • Fairness and bias: Ensuring that AI models do not perpetuate biases in society.
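
To make the attribution-method bullets above concrete, the following is a minimal sketch of computing SHAP attributions for a black-box classifier. It assumes the third-party shap and scikit-learn packages are installed; the dataset and model are illustrative choices only, not requirements of this call.

```python
# Minimal sketch: Shapley-value attributions for a tree-ensemble classifier.
# Assumes the `shap` and `scikit-learn` packages are installed; the dataset
# and model below are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])  # per-feature contribution scores

# For each prediction, the attributions plus the expected value sum to the
# model's output, yielding an additive per-prediction explanation.
```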

Reliable Machine Learning (RML) refers to the development of machine learning models that are robust, accurate, and able to generalize well to new data, including but not limited to the following common topics.

  • Model selection: Choosing the best machine learning algorithm for a given task based on performance metrics and computational requirements.
  • Overfitting: Avoiding the situation where a model performs well on training data but poorly on new data.
  • Regularization: Adding constraints to a model to prevent overfitting and improve generalization.
  • Cross-validation: Evaluating a model's performance using multiple subsets of the data to obtain a more accurate estimate of its performance (see the sketch after this list).
  • Hyperparameter tuning: Selecting the best values for the parameters of a model to optimize its performance.
  • Model ensembles: Combining multiple models to improve performance and reduce overfitting.
  • Data augmentation: Synthetically generating additional data to increase the size of the training dataset and improve generalization.
  • Transfer learning: Reusing pre-trained models on new tasks to speed up training and improve performance.
  • Anomaly detection: Detecting instances in the data that deviate from the norm and may indicate errors or outliers.
  • Unsupervised learning: Learning patterns in the data without labeled examples, for tasks such as clustering and dimensionality reduction.
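
As a concrete instance of the cross-validation, regularization, and hyperparameter-tuning topics above, here is a minimal scikit-learn sketch; the dataset, fold count, and penalty grid are illustrative assumptions.

```python
# Minimal sketch: k-fold cross-validation over an L2-regularized model.
# The dataset, fold count, and alpha grid are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

X, y = load_diabetes(return_X_y=True)

# Hyperparameter tuning: search over the L2 penalty strength `alpha`.
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(Ridge(), param_grid, cv=cv, scoring="r2")
search.fit(X, y)

# Cross-validated scores estimate generalization to unseen data, guarding
# against overfitting to a single train/test split.
print(f"best alpha: {search.best_params_['alpha']}, CV R^2: {search.best_score_:.3f}")
```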

Sustainable Machine Learning (SML) refers to the development and deployment of machine learning models that have minimal negative impact on the environment and society, including but not limited to the following common topics.

  • Energy-efficient computing: Designing and training machine learning models that require minimal computational resources, reducing energy consumption and carbon emissions (see the sketch after this list).
  • Privacy and security: Protecting sensitive information used in machine learning, such as personal data, to maintain privacy and security.
  • Responsible data collection and usage: Collecting and using data in an ethical and responsible manner, avoiding exploitation and harm.
  • Model waste: Reducing the waste generated by machine learning models, such as over-provisioned computing resources and redundant models.
  • Deployment and maintenance: Ensuring that machine learning models are deployed and maintained in a sustainable manner, minimizing their impact on the environment and society.
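
As one concrete route to the energy-efficient computing topic above, here is a minimal sketch of post-training dynamic quantization in PyTorch; the toy model is a stand-in, and actual energy savings depend on hardware and workload.

```python
# Minimal sketch: post-training dynamic quantization with PyTorch.
# The toy model is illustrative; real savings depend on hardware and workload.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Replace Linear layers with int8 dynamically quantized versions, shrinking
# the memory footprint and typically lowering inference cost on CPU.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, lower-precision arithmetic
```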

The special series editorial team reserves the right to recommend that submissions deemed out of scope or a modest fit be resubmitted to other regular SPS journals for consideration.

While manuscripts for this special series may be submitted at any time, interested authors are strongly encouraged to submit according to the following timetable to be considered for the inaugural issues in the first half of 2024. The timeline for additional articles in the series will be announced.

Important Dates

  • Manuscript Submission: July 1, 2023
  • First Review Due: September 15, 2023
  • Revised Manuscript Due: October 15, 2023
  • Second Review Due: November 15, 2023
  • Final Decision Due: December 15, 2023
  • Publication Date: First Half of 2024

Editorial Team

  • Xiao-Ping (Steven) Zhang (EIC-JSTSP), Toronto Metropolitan University & Tsinghua-Berkeley Shenzhen Institute
  • Bhuvana Ramabhadran (Lead), Google
  • Wenbo Ding, Tsinghua University
  • Yonina C. Eldar, Weizmann Institute of Science
  • Maria Sabrina Greco, University of Pisa
  • Zhu Han, University of Houston
  • Yi Ma, UC Berkeley and University of Hong Kong
  • Helen Meng, Chinese University of Hong Kong
  • Zheng-Hua Tan, Aalborg University
  • Dacheng Tao, University of Sydney
  • Zhou Wang, University of Waterloo
