End-to-End Audiovisual Speech Recognition System With Multitask Learning

By: 
Fei Tao; Carlos Busso

An automatic speech recognition (ASR) system is a key component in current speech-based systems. However, surrounding acoustic noise can severely degrade the performance of an ASR system. An appealing solution to this problem is to augment conventional audio-based ASR systems with visual features describing lip activity. This paper proposes a novel end-to-end, multitask learning (MTL), audiovisual ASR (AV-ASR) system. A key novelty of the approach is the use of MTL, where the primary task is AV-ASR and the secondary task is audiovisual voice activity detection (AV-VAD). We obtain a robust and accurate audiovisual system that generalizes across conditions. By detecting segments with speech activity, the AV-ASR performance improves, as its connectionist temporal classification (CTC) loss function can leverage the AV-VAD alignment information. Furthermore, the end-to-end system learns from the raw audiovisual inputs a discriminative high-level representation for both speech tasks, providing the flexibility to mine information directly from the data. The proposed architecture considers the temporal dynamics within and across modalities, providing an appealing and practical fusion scheme. We evaluate the proposed approach on a large audiovisual corpus (over 60 hours), which contains different channel and environmental conditions, comparing the results with competitive single-task learning (STL) and MTL baselines. Although our main goal is to improve the performance of the ASR task, the experimental results show that the proposed approach achieves the best performance across all conditions for both speech tasks. In addition to state-of-the-art performance in AV-ASR, the proposed solution provides valuable information about speech activity, solving two of the most important tasks in speech-based applications.
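The abstract describes a multitask objective in which a primary recognition loss is trained jointly with a secondary voice-activity-detection loss over a shared representation. The paper's implementation is not reproduced here; the sketch below only illustrates the general shape of such an MTL objective as a weighted sum of two task losses. The per-frame cross-entropy is a stand-in for the CTC loss used in the paper, and the weight `alpha`, along with all function names, is a hypothetical illustration rather than the authors' configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_cross_entropy(logits, labels):
    """Per-frame cross-entropy over (T, num_classes) logits.

    A simplified stand-in for the CTC loss of the primary
    AV-ASR task; CTC itself marginalizes over alignments.
    """
    probs = softmax(logits)
    t = np.arange(len(labels))
    return float(-np.mean(np.log(probs[t, labels] + 1e-12)))

def binary_cross_entropy(logits, targets):
    """Frame-level binary cross-entropy for the secondary
    AV-VAD task (speech vs. non-speech per frame)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return float(-np.mean(targets * np.log(p + 1e-12)
                          + (1 - targets) * np.log(1 - p + 1e-12)))

def multitask_loss(asr_logits, asr_labels, vad_logits, vad_targets,
                   alpha=0.8):
    """Weighted sum of the primary (ASR) and secondary (VAD) losses.

    alpha is a hypothetical interpolation weight; alpha=1 reduces
    to single-task ASR training.
    """
    l_asr = frame_cross_entropy(asr_logits, asr_labels)
    l_vad = binary_cross_entropy(vad_logits, vad_targets)
    return alpha * l_asr + (1 - alpha) * l_vad
```

In a full system, both heads would share the audiovisual encoder so that gradients from the VAD head shape the representation used by the recognizer, which is the mechanism the abstract credits for the improved CTC alignments.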
