Predict-and-Update Network: Audio-Visual Speech Recognition Inspired by Human Speech Perception

By Jiadong Wang, Xinyuan Qian, and Haizhou Li

Audio and visual signals complement each other in human speech perception, and the same applies to automatic speech recognition. In speech perception, the visual signal is less informative than the acoustic signal, but it is more robust in complex acoustic environments. It remains a challenge to effectively exploit the interaction between audio and visual signals for automatic speech recognition. Previous studies have used visual signals as redundant or complementary information to the audio input in a synchronous manner. However, human studies suggest another mechanism: the visual signal primes the listener in advance, indicating when and at which frequencies to attend. To simulate such a visual cueing mechanism, we propose a Predict-and-Update Network (P&U net) for Audio-Visual Speech Recognition (AVSR). In particular, we first predict the character posteriors of the spoken words, i.e. the visual embedding, based on the visual signal. The audio signal is then conditioned on the visual embedding via a novel cross-modal Conformer that updates the character posteriors. We validate the effectiveness of the visual cueing mechanism through extensive experiments. The proposed P&U net outperforms the state-of-the-art AVSR methods on both LRS2-BBC and LRS3-BBC datasets, with relative Word Error Rate (WER) reductions exceeding 10% and 40% under clean and noisy conditions, respectively.
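
To make the two-stage idea concrete, the sketch below illustrates the predict-and-update flow: a visual front-end first predicts per-frame character posteriors from lip features, and an audio module then attends to that visual prediction to update the posteriors. This is a minimal illustration under assumed module names, feature dimensions, and a plain cross-attention block; it is not the authors' implementation, and the full cross-modal Conformer (convolution and macaron feed-forward sub-layers) is omitted for brevity.

```python
# Minimal predict-and-update sketch in PyTorch (illustrative only; shapes,
# module names, and the simplified cross-attention block are assumptions).
import torch
import torch.nn as nn


class VisualPredictor(nn.Module):
    """Step 1: predict per-frame character posteriors (the 'visual embedding') from lip features."""

    def __init__(self, vis_dim: int, hidden_dim: int, num_chars: int):
        super().__init__()
        self.encoder = nn.GRU(vis_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_chars)

    def forward(self, vis_feats: torch.Tensor) -> torch.Tensor:
        # vis_feats: (batch, frames, vis_dim)
        enc, _ = self.encoder(vis_feats)
        return self.classifier(enc).softmax(dim=-1)  # (batch, frames, num_chars)


class CrossModalUpdater(nn.Module):
    """Step 2: condition audio features on the visual posteriors via cross-attention,
    then output updated character posteriors (a stand-in for the cross-modal Conformer)."""

    def __init__(self, audio_dim: int, num_chars: int, model_dim: int = 256, heads: int = 4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, model_dim)
        self.visual_proj = nn.Linear(num_chars, model_dim)
        self.cross_attn = nn.MultiheadAttention(model_dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(model_dim, model_dim), nn.ReLU(),
                                 nn.Linear(model_dim, num_chars))

    def forward(self, audio_feats: torch.Tensor, visual_post: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim); visual_post: (batch, frames, num_chars)
        q = self.audio_proj(audio_feats)        # audio frames act as queries
        kv = self.visual_proj(visual_post)      # keys/values come from the visual prediction
        fused, _ = self.cross_attn(q, kv, kv)   # audio attends to the visual cue
        return self.ffn(fused).softmax(dim=-1)  # updated character posteriors


if __name__ == "__main__":
    B, T, VIS, AUD, CHARS = 2, 50, 512, 80, 40  # hypothetical batch/frame/feature sizes
    predictor = VisualPredictor(VIS, 256, CHARS)
    updater = CrossModalUpdater(AUD, CHARS)
    vis, aud = torch.randn(B, T, VIS), torch.randn(B, T, AUD)
    prior = predictor(vis)            # predict from the visual signal
    posterior = updater(aud, prior)   # update with the audio signal
    print(posterior.shape)            # torch.Size([2, 50, 40])
```

The key design choice reflected here is the ordering: the visual stream produces a prediction before the audio is consumed, so the audio module refines an existing hypothesis rather than fusing the two modalities symmetrically.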