Multi-Speaker DOA Estimation Using Deep Convolutional Networks Trained With Noise Signals


Soumitro Chakrabarty; Emanuël A. P. Habets

Supervised learning-based methods for source localization, being data driven, can be adapted to different acoustic conditions via training and have been shown to be robust to adverse acoustic environments. In this paper, a convolutional neural network (CNN) based supervised learning method for estimating the direction of arrival (DOA) of multiple speakers is proposed. Multi-speaker DOA estimation is formulated as a multi-class multi-label classification problem, where the assignment of each DOA label to the input feature is treated as a separate binary classification problem. The phase component of the short-time Fourier transform (STFT) coefficients of the received microphone signals is fed directly into the CNN, and the features for DOA estimation are learned during training. Utilizing the assumption of disjoint speaker activity in the STFT domain, a novel method is proposed to train the CNN with synthesized noise signals. Through experimental evaluation with both simulated and measured acoustic impulse responses, the ability of the proposed DOA estimation approach to adapt to unseen acoustic conditions and its robustness to unseen noise types are demonstrated. Through additional empirical investigation, it is also shown that, with an array of M microphones, the proposed framework yields the best localization performance with M − 1 convolution layers. The ability of the proposed method to accurately localize speakers in a dynamic acoustic scenario with a varying number of sources is also shown.
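The formulation in the abstract — an STFT phase map as input, M − 1 small convolution layers for an M-microphone array, and an independent sigmoid output per DOA class — can be illustrated with a minimal forward-pass sketch. This is not the authors' implementation: the array size, number of frequency bins, number of DOA classes, random weights, and single-channel 2×1 filters are all hypothetical simplifications chosen only to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: M microphones, K STFT frequency bins, a grid of DOA classes
M, K, n_classes = 4, 256, 37

# Input feature: phase of the STFT coefficients for one time frame, shape (M, K)
stft = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
phase_map = np.angle(stft)

def conv2x1(x, w):
    """Valid 2x1 convolution along the microphone axis with ReLU: (m, K) -> (m-1, K)."""
    return np.maximum(w[0] * x[:-1] + w[1] * x[1:], 0.0)

# M - 1 conv layers collapse the microphone dimension from M down to 1
x = phase_map
for _ in range(M - 1):
    x = conv2x1(x, rng.standard_normal(2))

# Fully connected layer + sigmoid: each DOA class is a separate binary classifier,
# so any number of classes (speakers) may be active at once (multi-label output)
W = 0.01 * rng.standard_normal((n_classes, K))
probs = 1.0 / (1.0 + np.exp(-(W @ x.ravel())))

active_doas = np.flatnonzero(probs > 0.5)  # indices of DOA classes deemed active
```

Treating each DOA label as an independent binary decision (rather than a softmax over classes) is what lets the same network report one, two, or more simultaneous speakers without fixing their number in advance.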
