Multi-Speaker DOA Estimation Using Deep Convolutional Networks Trained With Noise Signals



By: Soumitro Chakrabarty; Emanuël A. P. Habets

Supervised learning-based methods for source localization, being data driven, can be adapted to different acoustic conditions via training and have been shown to be robust to adverse acoustic environments. In this paper, a convolutional neural network (CNN)-based supervised learning method for estimating the direction of arrival (DOA) of multiple speakers is proposed. Multi-speaker DOA estimation is formulated as a multi-class multi-label classification problem, where the assignment of each DOA label to the input feature is treated as a separate binary classification problem. The phase component of the short-time Fourier transform (STFT) coefficients of the received microphone signals is directly fed into the CNN, and the features for DOA estimation are learned during training. Utilizing the assumption of disjoint speaker activity in the STFT domain, a novel method is proposed to train the CNN with synthesized noise signals. Through experimental evaluation with both simulated and measured acoustic impulse responses, the ability of the proposed DOA estimation approach to adapt to unseen acoustic conditions and its robustness to unseen noise types are demonstrated. Through additional empirical investigation, it is also shown that, with an array of M microphones, the proposed framework yields the best localization performance with M − 1 convolution layers. The ability of the proposed method to accurately localize speakers in a dynamic acoustic scenario with a varying number of sources is also shown.
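To make the described architecture concrete, the following is a minimal sketch in PyTorch of a CNN that takes the STFT phase map of an M-microphone array as input, applies M − 1 convolution layers with 2 x 1 filters along the microphone axis, and produces a sigmoid output per candidate DOA, so that each DOA label is a separate binary decision. The layer widths, number of frequency bins, and number of DOA classes here are illustrative assumptions, not values taken from the paper.

    # Hypothetical sketch of the abstract's CNN-based multi-label DOA classifier.
    # Assumed hyperparameters: 64 channels per conv layer, 256 frequency bins,
    # 37 candidate DOA classes (e.g., 0-180 degrees in 5-degree steps).
    import torch
    import torch.nn as nn

    class PhaseMapDOACNN(nn.Module):
        def __init__(self, num_mics=4, num_freq_bins=256, num_doa_classes=37):
            super().__init__()
            layers = []
            in_ch = 1
            # M - 1 convolution layers; each 2x1 kernel spans a pair of adjacent
            # microphones, shrinking that axis from M down to 1.
            for _ in range(num_mics - 1):
                layers += [nn.Conv2d(in_ch, 64, kernel_size=(2, 1)), nn.ReLU()]
                in_ch = 64
            self.conv = nn.Sequential(*layers)
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * num_freq_bins, 512), nn.ReLU(),
                nn.Linear(512, num_doa_classes),
            )

        def forward(self, phase_map):
            # phase_map: (batch, 1, M, K) phase of the STFT coefficients
            logits = self.fc(self.conv(phase_map))
            # Sigmoid per class: each DOA label is an independent binary output.
            return torch.sigmoid(logits)

    # Example: a batch of frames from a 4-microphone array with 256 bins.
    model = PhaseMapDOACNN(num_mics=4, num_freq_bins=256, num_doa_classes=37)
    probs = model(torch.randn(8, 1, 4, 256))   # (8, 37) per-class probabilities
    # Training would use a binary cross-entropy loss (nn.BCELoss) against
    # multi-hot target vectors marking the active DOA classes.

This is only an architectural illustration of the multi-class multi-label formulation; the paper's actual filter counts, layer sizes, and training setup (including the noise-signal training procedure) are specified in the full text.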
