IEEE JSTSP Special Series on AI in Signal & Data Science - Toward Large Language Model (LLM) Theory and Applications

Manuscript Due: 31 December 2024

Call for Papers

News on JSTSP AI/ML Special Series – Toward Large Language Model (LLM) Theory and Applications 

 

Good news! You asked and we listened. With more than one issue being planned in 2025 for the new JSTSP AI/ML special series, we will start to accept rolling submissions. Manuscripts received before December 31, 2024 have the potential to be considered for the first issue of this series. Manuscripts submitted afterward are likely to be considered for a later issue (details to be announced later).

 

Journal version of previous conference papers: Authors of papers from SPS conferences and workshops are welcome to develop their papers into comprehensive journal versions. Such submissions will be handled in the special series in accordance with the general SPS and JSTSP publication policies.

 

The special series offers a new fast-track opportunity for authors of recent peer-reviewed papers accepted to the main program of highly competitive conferences (such as NeurIPS, ICML, ICLR, CVPR, and AAAI). Manuscripts that fall within the special series' scope and that the authors have further developed from these conference papers may be submitted for consideration. Since the journal version is intended to be the definitive, archival version of the research, JSTSP expects authors to take this opportunity to further improve their conference paper. Read the instructions and guidelines for this fast-track opportunity below.

 

Please note that IEEE requires that the journal paper be a “substantial revision” of the previous publication (30 percent new material is generally considered “substantial”). JSTSP interprets and applies this IEEE requirement on a case-by-case basis with appropriate deference to the authors’ viewpoint. Improvements we expect to see over the conference paper include additional technical details, thorough critical analysis and discussion, additional experiments where appropriate, and an updated treatment of the state of the art. Authors are encouraged to make the journal version a significant advancement in technical content over the initial conference paper.

 

To facilitate our piloting of an expedited review process, please include a letter explaining the main improvements over the conference paper and describing how the current journal submission addresses the comments from past conference reviews and other inputs, if any. Verbatim reviews from the respective conference should be attached as auxiliary material to facilitate expedited handling by the editorial team; authors must not make inappropriate alterations to the reviews or omit critical reviews from the past conference.

 

Suggested reviewers: For each submission, we welcome suggestions of potential reviewers based on the authors’ understanding of the technical relevance. Names and contact information of independent reviewers who have no obvious ties to the author team are particularly helpful; if there is any potential conflict of interest involving a suggested name, please explain.

 

Important Dates

  • Manuscript Submission: 31 December 2024
  • First Review Due: 1 February 2025
  • Revised Manuscript Due: 1 March 2025
  • Second Review Due: 1 May 2025
  • Final Decision Due: 15 May 2025
  • Publication Date: May/June 2025

Editorial Team

  • Xiao-Ping (Steven) Zhang (EIC), Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
  • Zheng-Hua Tan (Lead), Aalborg University
  • Shuvra Bhattacharyya, University of Maryland
  • Wenbo Ding, Tsinghua University
  • Maria Sabrina Greco, University of Pisa
  • Zhu Han, University of Houston
  • Jiebo Luo, University of Rochester
  • Helen Meng, Chinese University of Hong Kong
  • Dacheng Tao, University of Sydney
  • Zhou Wang, University of Waterloo
  • Aylin Yener, Ohio State University
  • Dong Yu, Tencent AI Lab
  • Junsong Yuan, University at Buffalo

Starting in 2024, JSTSP has included a series of articles on AI in Signal and Data Science. The series aims to serve as a platform for communicating state-of-the-art AI/ML research in signal and data science, highlighting the research challenges that remain open and exploring innovative principles and solutions to resolve them. Over the past year, we have received tremendous feedback from the community and have published or planned two issues featuring high-quality work on “Towards Explainable, Reliable and Sustainable Machine Learning.”

As we move forward, we continue to invite submissions of high-quality manuscripts on relevant emerging sub-topics. We seek original papers that have not been published previously and are not currently under review by any other publication venue. The initial scope of the 2025 issues in this series includes cutting-edge AI areas relevant to the broader signal and data science communities, specifically those concerning Large Language Model (LLM) based signal and data science, in both theory and applications.

This includes, but is not limited to, the following topics:

Multimodal Large Language Models: This covers advanced AI systems capable of processing, understanding, and generating content across multiple modalities, such as text, images, speech, audio, and video.

Theoretical Foundations: This area focuses on the underlying principles and methods for the design, training, and application of LLMs.

Training Strategies: This topic addresses methodologies for training LLMs, particularly those that tackle challenges in scalability, data and computational resource efficiency, and generalization ability.

Fine-tuning and Adaptation: This topic explores in-weights learning methods for fine-tuning LLMs to specific tasks or domains for enhanced performance.

In-context Learning and Prompt Engineering: This subject focuses on strategies for in-context learning, where LLMs utilize contextual information to make predictions, and prompt engineering, which involves crafting prompts to effectively guide LLMs toward generating the desired outputs.

Reasoning Abilities: This research area analyzes the logical reasoning capabilities of LLMs, with a focus on the Chain-of-Thought approach.

Causal Reasoning: This subject focuses on exploring LLMs’ capabilities in causal tasks, including knowledge-based causal discovery, LLM-based causal inference, human-LLM collaboration, and understanding and improving causal reasoning.

Explainability and Interpretability: This topic is dedicated to understanding and interpreting the outputs of LLMs to make these models transparent, interpretable, and accountable.

Emerging Applications: This topic focuses on the potential killer applications powered by LLMs, including human-machine interfaces, robotics, embodied intelligence, and others.