IEEE TSIPN Special Issue on Learning on Graphs for Biology and Medicine

Manuscript Due: September 1, 2024
Publication Date: March 2025


Topics of interest include (but are not limited to) novel algorithms for biomedical applications and methodological aspects such as:
  • Extension of classical signal processing notions (e.g., graph filtering, graph transforms) to graph-structured data for biomedical data inference and understanding (see the illustrative sketch after this list)
  • Higher-order graph representations for biomedicine
  • Graph representation learning including graph neural networks for biomedical applications
  • Joint/multi-view graph learning for subtype identification
  • Inference on multilayer networks for robust and invariant representations
  • Learning over heterogeneous graphs (e.g., knowledge graphs) for extracting knowledge from multimodal biomedical data
  • Generative graph models for new biological discoveries
  • Combining mechanistic knowledge from biology with graph representation learning for better inference
  • Foundation models on graphs for biomedicine
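
To illustrate the first topic above, the following minimal Python sketch (assuming only NumPy; the toy graph, node signal, and filter coefficients are purely illustrative and not tied to any specific dataset or submission) shows a classical graph signal processing operation, low-pass graph filtering via a polynomial of the normalized Laplacian, of the kind whose extension to biomedical graph-structured data falls within the scope of this special issue.

```python
import numpy as np

# Toy undirected graph (e.g., a small protein-interaction-like network);
# the adjacency matrix below is purely illustrative.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

# A signal on the nodes (e.g., per-gene expression values) with additive noise.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.9, 1.1, -1.0, -0.8]) + 0.1 * rng.standard_normal(5)

# Order-2 polynomial graph filter h(L) = theta_0*I + theta_1*L + theta_2*L^2.
# The coefficients are arbitrary low-pass-like values chosen for illustration.
theta = [1.0, -0.6, 0.1]
H = theta[0] * np.eye(5) + theta[1] * L + theta[2] * (L @ L)

x_filtered = H @ x
print("input signal:   ", np.round(x, 3))
print("filtered signal:", np.round(x_filtered, 3))
```

The same pattern, a polynomial of a graph shift operator applied to node signals, underlies many graph filtering and graph neural network constructions; contributions would typically replace the toy graph and signal with real biomedical networks and measurements.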

We encourage submissions across various data modalities (e.g., single-cell gene expression, protein interaction networks, molecular networks, brain networks, biological knowledge graphs, medical imaging) and application domains, such as disease understanding (e.g., protein structure prediction), precision medicine (e.g., analysis of gene expression, single-cell transcriptomics, and multi-omics data), novel therapeutic development and antibiotic discovery (e.g., drug-drug and/or drug-target interaction prediction), and neuroscience (e.g., combining functional and structural connectivity).

Important Dates:
  • Special issue announcement: March 1, 2024
  • Manuscripts due: September 1, 2024
  • Review results and decision notification: November 1, 2024
  • Revised manuscripts due: December 1, 2024
  • Final acceptance notification: January 1, 2025
  • Camera-ready paper due: February 1, 2025
  • Publication date: March 2025

Guest Editors