Call for Papers: IEEE JSTSP Special Series on AI in Signal & Data Science - Toward Large Language Model (LLM) Theory and Applications

To address the rapidly growing interest in artificial intelligence (AI) and machine learning (ML) for signal processing and data science, the IEEE Signal Processing Society (SPS) has launched a new special series on AI in Signal & Data Science, to be published within the IEEE Journal of Selected Topics in Signal Processing (JSTSP).

Starting in 2024, JSTSP has included a series of articles on AI in Signal and Data Science. The series aims to serve as a platform for communicating state-of-the-art AI/ML research in signal and data science, highlighting research challenges that remain open, and exploring innovative principles and solutions for resolving them. Over the past year, we have received tremendous feedback from the community and have published or planned two issues featuring high-quality work on “Towards Explainable, Reliable and Sustainable Machine Learning.”

The special series editorial team reserves the right to recommend that submissions deemed out of scope, or only a modest fit, be resubmitted to other regular SPS journals for consideration.

While manuscripts designating this special series may be submitted at any time, interested authors are strongly encouraged to submit according to the following timetable to be considered for the issues in the first half of 2025.

Important Dates

  • Manuscript Submission: 1 December 2024
  • First Review Due: 1 February 2025
  • Revised Manuscript Due: 1 March 2025
  • Second Review Due: 1 May 2025
  • Final Decision Due: 15 May 2025
  • Publication Date: May/June 2025

Editorial Team

  • Xiao-Ping (Steven) Zhang (EIC), Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
  • Zheng-Hua Tan (Lead), Aalborg University
  • Shuvra Bhattacharyya, University of Maryland
  • Wenbo Ding, Tsinghua University
  • Yonina C. Eldar, Weizmann Institute of Science
  • Maria Sabrina Greco, University of Pisa
  • Zhu Han, University of Houston
  • Jiebo Luo, University of Rochester
  • Helen Meng, Chinese University of Hong Kong
  • Dacheng Tao, University of Sydney
  • Zhou Wang, University of Waterloo
  • Aylin Yener, Ohio State University
  • Dong Yu, Tencent AI Lab
  • Junsong Yuan, University at Buffalo

As we move forward, we continue to invite submissions of high-quality manuscripts on relevant emerging sub-topics. We seek original papers that have not been published previously and are not currently under review by any other publication venue. The initial scope of the 2025 issues in this series includes cutting-edge AI areas relevant to the broader signal and data science communities, specifically those concerning Large Language Model (LLM)-based signal and data science, in both theory and applications.

This includes, but is not limited to, the following topics:

Multimodal Large Language Models: This covers advanced AI systems capable of processing, understanding, and generating content across multiple modalities, such as text, images, speech, audio, and video.

Theoretical Foundations: This area focuses on the underlying principles and methods for the design, training, and application of LLMs.
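
As one illustration of the kind of principle in scope, the sketch below evaluates the empirical compute-optimal scaling law of Hoffmann et al. (2022), which predicts pretraining loss from model size and data size; the coefficient values are the published fits and are quoted here only as an illustrative assumption.

    # Sketch of an empirical LLM scaling law (Hoffmann et al., 2022):
    # L(N, D) = E + A / N**alpha + B / D**beta predicts pretraining loss
    # from parameter count N and training-token count D. The coefficients
    # below are the published fits, quoted as an illustrative assumption.
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pretraining loss for N parameters trained on D tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    # Example: a 70B-parameter model trained on 1.4T tokens.
    print(predicted_loss(70e9, 1.4e12))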

Training Strategies: This topic addresses methodologies for training LLMs, particularly those that tackle challenges in scalability, data and computational resource efficiency, and generalization ability.
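
To make the resource-efficiency challenge concrete, here is a minimal PyTorch sketch of gradient accumulation, one common technique for training under memory constraints; the tiny model, random data, and placeholder loss are assumptions for illustration only.

    # Gradient accumulation: gradients from several small micro-batches
    # are summed before a single optimizer step, emulating a larger
    # effective batch size within a fixed memory budget.
    import torch

    model = torch.nn.Linear(512, 512)            # stand-in for an LLM
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    accum_steps = 8                              # micro-batches per update

    for _ in range(accum_steps):
        x = torch.randn(4, 512)                  # placeholder micro-batch
        loss = model(x).pow(2).mean()            # placeholder loss
        (loss / accum_steps).backward()          # scale so the sum matches one large batch

    optimizer.step()                             # one update for the full effective batch
    optimizer.zero_grad()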

Fine-tuning and Adaptation: This topic explores in-weights learning methods that fine-tune LLMs for specific tasks or domains to enhance performance.
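
As an illustration of one widely used parameter-efficient form of such adaptation, the sketch below implements a LoRA-style low-rank adapter (Hu et al., 2021) around a frozen linear layer; the layer sizes and rank are arbitrary assumptions.

    # LoRA-style adapter: the pretrained weights are frozen and only a
    # low-rank update B @ A (rank r) is trained, sharply reducing the
    # number of trainable parameters during fine-tuning.
    import torch

    class LoRALinear(torch.nn.Module):
        def __init__(self, d_in: int, d_out: int, rank: int = 8):
            super().__init__()
            self.base = torch.nn.Linear(d_in, d_out)   # stands in for a pretrained layer
            for p in self.base.parameters():
                p.requires_grad_(False)                # frozen during fine-tuning
            self.lora_a = torch.nn.Parameter(torch.randn(rank, d_in) * 0.01)
            self.lora_b = torch.nn.Parameter(torch.zeros(d_out, rank))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen base projection plus the trainable low-rank update.
            return self.base(x) + x @ self.lora_a.T @ self.lora_b.T

    layer = LoRALinear(512, 512)
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))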

In-context Learning and Prompt Engineering: This subject focuses on strategies for in-context learning, where LLMs utilize contextual information to make predictions, and prompt engineering, which involves crafting prompts to effectively guide LLMs towards generating the desired outputs.
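
As a simple illustration, the sketch below constructs a few-shot prompt in which labeled examples are supplied in context so the model can infer the task without any weight update; the task and examples are invented.

    # Few-shot in-context learning: demonstrations are placed in the
    # prompt itself, and the model is expected to continue the pattern.
    examples = [
        ("The device overheats after ten minutes.", "negative"),
        ("Battery life exceeded my expectations.", "positive"),
    ]
    query = "The signal drops whenever I leave the room."

    prompt = "Classify the sentiment of each review as positive or negative.\n\n"
    for text, label in examples:
        prompt += f"Review: {text}\nSentiment: {label}\n\n"
    prompt += f"Review: {query}\nSentiment:"   # the model completes this line

    print(prompt)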

Reasoning Abilities: This research area analyzes the logical reasoning capabilities of LLMs, with a focus on the Chain-of-Thought approach.
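
For instance, a chain-of-thought prompt in the style of Wei et al. (2022) pairs a question with worked intermediate steps before posing a new question; the arithmetic problems below are invented for illustration.

    # Chain-of-thought prompting: the in-context example demonstrates
    # intermediate reasoning, encouraging the model to reason step by
    # step before committing to an answer.
    cot_prompt = (
        "Q: A lab records 12 signals per hour for 5 hours, then discards "
        "a quarter of them as noise. How many signals remain?\n"
        "A: The lab records 12 * 5 = 60 signals. A quarter of 60 is 15, "
        "so 60 - 15 = 45 signals remain. The answer is 45.\n\n"
        "Q: A sensor samples 8 times per second for 30 seconds, then keeps "
        "half of the samples. How many samples are kept?\n"
        "A:"   # the model is expected to produce the reasoning steps here
    )
    print(cot_prompt)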

Causal Reasoning: This subject focuses on exploring LLMs’ capabilities in causal tasks, including knowledge-based causal discovery, LLM-based causal inference, human-LLM collaboration, and understanding and improving causal reasoning.
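
As one concrete instance, knowledge-based causal discovery can be posed as a pairwise query to the model, as in the minimal sketch below; the variable pair and phrasing are assumptions for illustration.

    # Knowledge-based causal discovery: the model is asked for the causal
    # direction between two variables, drawing on world knowledge rather
    # than observational data.
    def causal_direction_prompt(var_a: str, var_b: str) -> str:
        return (
            "Which statement is more plausible?\n"
            f"(A) Changing '{var_a}' causes a change in '{var_b}'.\n"
            f"(B) Changing '{var_b}' causes a change in '{var_a}'.\n"
            "Answer with A or B, then justify briefly."
        )

    print(causal_direction_prompt("altitude", "air pressure"))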

Explainability and Interpretability: This topic is dedicated to understanding and interpreting the outputs of LLMs to make these models transparent, interpretable, and accountable.

Emerging Applications: This topic focuses on the potential killer applications powered by LLMs, including human-machine interfaces, robotics, embodied intelligence, and others.
