Date: 09-March-2026
Time: 9:00 AM ET (New York Time)
Presenter: Dr. Davide Fantini
Based on the IEEE Xplore® article:
A Survey on Machine Learning Techniques for Head-Related Transfer Function Individualization
Published in: IEEE Open Journal of Signal Processing, January 2025
Download article: The article is open access and publicly available for download. ARTICLE LINK
Abstract
Immersive spatial audio is essential for realism in applications ranging from virtual reality to teleconferencing. Central to this experience are Head-Related Transfer Functions (HRTFs), which capture how an individual’s anatomy filters sound, enabling its spatialized rendering in a 3D virtual environment. However, using generic HRTFs often leads to poor localization and unrealistic audio. While HRTF acoustic measurement provides the most accurate personalization, it is impractical for mass adoption, paving the way for machine learning solutions.
This webinar presents a comprehensive survey of machine learning techniques for HRTF individualization. We will examine the complete machine learning workflow, systematically categorizing existing methods by the data they employ—such as anthropometric measurements and ear images—preprocessing steps, model architectures—ranging from linear regression to deep neural networks—and their evaluation. Furthermore, the presentation will address critical challenges in validation and the current lack of standardization in the field. Attendees will gain insights into the state of the art of data-driven spatial audio, the limitations of current benchmarks, and promising avenues for future research in creating personalized auditory experiences.
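To make the underlying rendering step concrete: spatializing a mono source with an HRTF amounts to filtering the signal with the left- and right-ear head-related impulse responses (HRIRs) for the source direction. The sketch below is illustrative only, using synthetic toy HRIRs; a real system would use HRIRs measured for (or predicted by a machine learning model for) the individual listener.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left/right
    head-related impulse responses (HRIRs) for one source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, N + M - 1)

# Toy example: a noise source and hand-made HRIRs encoding an
# interaural time difference (delay) and level difference (gain).
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_left = np.zeros(64)
hrir_left[0] = 1.0    # near ear: earlier, louder
hrir_right = np.zeros(64)
hrir_right[20] = 0.5  # far ear: delayed, attenuated
binaural = render_binaural(mono, hrir_left, hrir_right)
```

These interaural time and level differences are among the cues the auditory system uses to localize sound; individualized HRTFs additionally capture the direction-dependent spectral filtering of the listener's ears, head, and torso.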
Biography
Davide Fantini received the M.Sc. and Ph.D. degrees in computer science from the University of Milan, Milan, Italy, in 2019 and 2024, respectively.
He is currently a Research Fellow in the Department of Computer Science at the University of Milan. His research interests include machine learning, signal processing, extended reality (XR), and their application to spatial audio, with a focus on head-related transfer functions (HRTFs), binaural rendering, artificial reverberation, and their influence on auditory spatial perception.
Dr. Fantini is involved in the SONICOM project, funded under the EU’s Horizon 2020 programme, which leverages artificial intelligence to design immersive audio technologies.
