SPS-DSI (DEGAS) Webinar: Regularity of Graph Neural Networks

Date: 13 March 2024
Time: 1:00 PM (Paris Time)
Presenter(s): Ron Levie

The DEGAS Webinar Series is an event initiated by the Data Science Initiative (DSI) of the IEEE Signal Processing (SP) Society. Its goal is to provide the SP community with updates and advances in learning and inference on graphs. Signal processing and machine learning often deal with data living in regular domains such as space and time. This webinar series covers the extension of these methods to network data, including topics such as graph filtering, graph sampling, spectral analysis of network data, graph topology identification, and geometric deep learning. Applications can be found, for instance, in image processing, social networks, epidemics, wireless communications, brain science, recommender systems, and sensor networks. These bi-weekly webinars are hosted on Zoom, with recordings made available on the IEEE Signal Processing Society’s YouTube channel following the live events. Further details about live and streaming access will follow. Each webinar speaker gives a lecture, followed by Q&A and discussion.

Abstract

In recent years, graph neural networks (GNNs) have led to ground-breaking achievements in the applied sciences and industry. These achievements pose exciting theoretical challenges: can the success of GNNs be grounded in solid mathematical frameworks? A GNN is a function that takes graphs (with node features) and returns vectors in R^n. Since the input space of a GNN is non-Euclidean, i.e., the input graphs can have any size and any topology, less is known about GNNs than about standard neural networks. In this talk, we claim that past theories of GNNs were missing an important ingredient: meaningful notions of a metric on the input space, namely, graph similarity measures that are defined for all graphs, of any size, and that respect and, in some sense, describe the behavior of GNNs. We will show how to construct and analyze such graph metrics using graphon theory and Szemerédi's regularity lemma. This shows that GNNs behave regularly in a precise sense, which leads to generalization bounds, universal approximation theorems, expressivity results, and novel GNN designs.
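To make the abstract's framing concrete, here is a minimal sketch (not taken from the talk) of a GNN as a function from graphs with node features to R^n: a message-passing network that works for a graph of any size, with a permutation-invariant mean-pooling readout whose output dimension is fixed. All names (`gnn_readout`, the weight shapes) are illustrative assumptions, not the speaker's construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_readout(A, X, W1, W2):
    """Minimal message-passing GNN sketch (illustrative only).

    A  : (m, m) adjacency matrix of a graph with m nodes
    X  : (m, d) node-feature matrix
    W1, W2 : learned weight matrices, shapes (d, h) and (h, n)

    Returns a vector in R^n whose dimension n is independent of
    the number of nodes m, as in the abstract's description of a
    GNN as a map from graphs (with node features) to R^n.
    """
    H = relu((A @ X) @ W1)   # layer 1: aggregate neighbor features, transform
    H = relu((A @ H) @ W2)   # layer 2: aggregate again
    return H.mean(axis=0)    # permutation-invariant readout in R^n

# Illustrative dimensions and random weights (hypothetical values).
rng = np.random.default_rng(0)
d, h, n = 3, 8, 4
W1 = rng.standard_normal((d, h))
W2 = rng.standard_normal((h, n))

# A triangle graph with 3 nodes and random node features.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
X = rng.standard_normal((3, d))

v = gnn_readout(A, X, W1, W2)  # v has shape (n,), whatever the graph size
```

Because the readout averages over nodes, relabeling the nodes (conjugating A and permuting the rows of X by the same permutation) leaves the output unchanged; this permutation invariance is exactly why the input space is a space of graphs rather than of matrices, motivating the graph metrics discussed in the talk.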

Biography

Ron Levie is a Senior Lecturer (Assistant Professor) at the Faculty of Mathematics, Technion – Israel Institute of Technology. Until 2022 he was a postdoctoral researcher in the Mathematical Foundations of Artificial Intelligence Group, Mathematical Institute, Ludwig Maximilian University of Munich. Before that, he was a postdoctoral researcher in the Research Group Applied Functional Analysis, Institute of Mathematics, Technische Universität Berlin. He received his Ph.D. in applied mathematics from Tel-Aviv University, under the supervision of Prof. Nir Sochen. His research interests are in the mathematics of deep learning, graph deep learning, explainability in deep learning, applications of deep learning such as wireless communication, applied harmonic analysis, time-frequency and wavelet analysis, and randomized algorithms.