Distinguished Lecture: 31 March 2023, Andreas Stolcke (Amazon Alexa Speech)
Date: 31 March 2023
Chapter: Atlanta Chapter
Chapter Chair: Wendy Newcomb
Title: Improving Fairness in Speaker Recognition and Speech Recognition
Access virtually/online:
3:00 PM | (UTC-04:00) Eastern Time (US & Canada) | 2 hrs
Meeting number: 2535 026 0259
Meeting password: qgMA8PSte32
You can also dial 173.243.2.68 and enter your meeting number.
To dial from an IEEE Video Conference System: *1 2535 026 0259
Tap to join from a mobile device (attendees only)
+1-415-655-0002,,25350260259## United States Toll
+1-855-282-6330,,25350260259## United States Toll Free
Biography
Andreas Stolcke is a senior principal scientist in the Alexa Speech organization at Amazon. He obtained his PhD from UC Berkeley and then worked as a researcher at SRI International and Microsoft before joining Amazon. His research interests include computational linguistics, language modeling, speech recognition, speaker recognition and diarization, and paralinguistics, with over 300 papers and patents in these areas. His open-source SRI Language Modeling Toolkit was widely used in academia before being superseded by deep neural network language models. Andreas is a Fellow of the IEEE and of the International Speech Communication Association, and is giving this talk as an IEEE Distinguished Industry Speaker.
Abstract
Group fairness, or the avoidance of large performance disparities across different cohorts of users, is a major concern as AI technologies find adoption in ever more application scenarios. In this talk I will present some recent work on fairness for speech-based technologies, specifically speaker recognition and speech recognition. For speaker recognition, I report on two algorithmic approaches to reduce performance variability across different groups. In the first method, group-adapted fusion, we combine sub-models that are specialized for subpopulations with very different representation (and therefore performance) in the data. The second method, adversarial reweighting, forces the model to focus on those portions of the population that are harder to recognize, without requiring a priori labels for speaker groups. For automatic speech recognition, I present methods for detecting and mitigating accuracy disparities as a function of geographic or demographic variables, principally by oversampling or adaptation based on group membership. The talk concludes with an application of synthetic speech generation (TTS) to fill data gaps for a group with atypical speech, namely speakers who stutter.
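To make the adversarial-reweighting idea concrete, here is a minimal sketch, assuming a generic PyTorch setup with toy data; the models, data, and training loop below are illustrative assumptions, not the speaker-recognition systems described in the talk. An adversary network assigns per-example weights so as to maximize the learner's weighted loss, which concentrates training on the hardest-to-recognize examples without requiring any group labels.

```python
# Minimal sketch of adversarial reweighting on toy data (PyTorch).
# Everything here (data, model sizes, training loop) is an illustrative
# assumption, not the speaker-recognition setup from the talk.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data standing in for recognition trials.
X = torch.randn(512, 2)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()

learner = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
# The adversary maps each example's features to a weight; it is trained to
# up-weight examples the learner handles poorly, with no group labels needed.
adversary = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

opt_learner = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_adversary = torch.optim.Adam(adversary.parameters(), lr=1e-2)
per_example_bce = nn.BCEWithLogitsLoss(reduction="none")

for step in range(200):
    # Learner step: minimize the adversarially weighted loss.
    loss_vec = per_example_bce(learner(X).squeeze(1), y)
    weights = torch.softmax(adversary(X).squeeze(1), dim=0) * len(X)
    opt_learner.zero_grad()
    (weights.detach() * loss_vec).mean().backward()
    opt_learner.step()

    # Adversary step: maximize the same weighted loss (minimize its
    # negation), shifting weight toward the hardest examples.
    loss_vec = per_example_bce(learner(X).squeeze(1), y).detach()
    weights = torch.softmax(adversary(X).squeeze(1), dim=0) * len(X)
    opt_adversary.zero_grad()
    (-(weights * loss_vec).mean()).backward()
    opt_adversary.step()
```

In a real speaker-recognition setting, the same two-player loop would run over verification trials and embedding models rather than this toy classifier; the key property, that the adversary discovers hard subpopulations from the loss alone, is what removes the need for a priori group labels.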