SPS-DSI (DEGAS) Webinar: Low Distortion Embedding with Bottom-up Manifold Learning
Date: 22 May 2024
Time: 5:00 PM (Paris Time)
Presenter(s): Dr. Gal Mishne
The DEGAS Webinar Series is an event initiated by the Data Science Initiative (DSI) of the IEEE Signal Processing (SP) Society. The goal is to provide the SP community with updates and advances in learning and inference on graphs. Signal processing and machine learning often deal with data living in regular domains such as space and time. This webinar series will cover the extension of these methods to network data, including topics such as graph filtering, graph sampling, spectral analysis of network data, graph topology identification, geometric deep learning, and so on. Applications can be found, for instance, in image processing, social networks, epidemics, wireless communications, brain science, recommender systems, and sensor networks. These bi-weekly webinars will be hosted on Zoom, with recordings made available on the IEEE Signal Processing Society’s YouTube channel following the live events. Further details about live and streaming access will follow. Each webinar speaker will give a lecture, followed by Q&A and discussion.
Abstract
Manifold learning algorithms aim to map high-dimensional data into lower dimensions while preserving local and global structure; however, popular methods distort distances between points in the low-dimensional space. In this talk, I present a bottom-up manifold learning framework that constructs low-distortion local views of a dataset in lower dimensions and registers these views to obtain a global embedding. Our global alignment formulation enables tearing manifolds so as to embed them into their intrinsic dimension, including manifolds without boundary and non-orientable manifolds. To quantitatively evaluate the quality of low-dimensional embeddings, we present new strong and weak notions of global distortion. We show that Riemannian Gradient Descent (RGD) converges to a global embedding with guaranteed low global distortion. Compared to competing manifold learning and data visualization approaches, our framework achieves the lowest local and global distortion, as well as the lowest reconstruction error in downstream decoding tasks, on synthetic and real-world neuroscience datasets. This is joint work with Dhruv Kohli, Alex Cloninger, Bas Nieuwenhuis, and Devika Narain.
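To make the bottom-up idea concrete, the sketch below illustrates the general recipe in Python: build low-distortion local views of the data and then register the overlapping views into one global coordinate system. It is not the algorithm presented in the talk; it substitutes plain local PCA for the low-distortion view construction and a simple alternating orthogonal-Procrustes scheme for the Riemannian Gradient Descent alignment, and names such as build_local_views and stitch_views are hypothetical.

```python
# Illustrative sketch only (not the speaker's method): bottom-up embedding from
# local PCA views, registered into global coordinates by iterative Procrustes.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_local_views(X, n_neighbors=20, d=2):
    """Local views: d-dimensional PCA coordinates of each point's neighborhood."""
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nbrs.kneighbors(X)
    views = []
    for nbr_idx in idx:
        patch = X[nbr_idx]
        centered = patch - patch.mean(axis=0)
        # d leading right singular vectors give local tangent-plane coordinates
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        views.append((nbr_idx, centered @ Vt[:d].T))
    return views

def stitch_views(views, n_points, d=2, n_sweeps=25, seed=0):
    """Register overlapping local views into one global embedding.
    (The talk uses a Riemannian gradient descent formulation with distortion
    guarantees; this alternating Procrustes loop only illustrates the idea.)"""
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-3, size=(n_points, d))  # current global coordinates
    counts = np.zeros(n_points)
    for _ in range(n_sweeps):
        Y_new = np.zeros_like(Y)
        counts[:] = 0
        for nbr_idx, local in views:
            # Orthogonal Procrustes: rotate the local view onto the current
            # global estimate of the same points, then translate it there.
            A = local - local.mean(axis=0)
            B = Y[nbr_idx] - Y[nbr_idx].mean(axis=0)
            U, _, Vt = np.linalg.svd(A.T @ B)
            aligned = A @ (U @ Vt) + Y[nbr_idx].mean(axis=0)
            Y_new[nbr_idx] += aligned
            counts[nbr_idx] += 1
        Y = Y_new / counts[:, None]  # average the registered views per point
    return Y

if __name__ == "__main__":
    # Toy example: a developable 2-D strip rolled up in 3-D.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 3 * np.pi, 1500)
    h = rng.uniform(0, 5, 1500)
    X = np.c_[t * np.cos(t), h, t * np.sin(t)]
    views = build_local_views(X, n_neighbors=25, d=2)
    Y = stitch_views(views, n_points=X.shape[0], d=2)
    print("global embedding shape:", Y.shape)
```

On a developable surface such as this toy strip, stitching local PCA views recovers a roughly isometric planar layout; the framework discussed in the talk goes further, allowing manifolds to be torn and providing guarantees on the global distortion of the RGD alignment.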
Biography
Gal Mishne received the B.Sc. degree (summa cum laude) in electrical engineering and physics and the Ph.D. degree in electrical engineering, both from the Technion–Israel Institute of Technology, Haifa, in 2009 and 2017, respectively.
She is an assistant professor in the Halıcıoğlu Data Science Institute (HDSI) at UC San Diego, and is affiliated with the ECE department, the CSE department, and the Neurosciences Graduate Program. Before joining UCSD, Dr. Mishne was a Gibbs Assistant Professor in the Applied Math program at Yale University, in Prof. Ronald Coifman’s research group. Prior to her Ph.D. studies, she worked as an image processing engineer for several years.
Her research interests include high-dimensional data analysis, geometric representation learning, image processing, and computational neuroscience. Dr. Mishne is a 2017 Rising Star in EECS and an Emerging Scholar in Science.