Target source extraction is significant for improving human speech intelligibility and the speech recognition performance of computers. This study describes a method for target source extraction, called the similarity-and-independence-aware beamformer (SIBF). The SIBF extracts the target source using a rough magnitude spectrogram as the reference signal. The advantage of the SIBF is that it can obtain a more accurate signal than the spectrogram generated by target-enhancing methods such as speech enhancement based on deep neural networks. For the extraction, we extend the framework of deflationary independent component analysis (ICA) by considering the similarities between the reference and extracted target sources, in addition to the mutual independence of all the potential sources. To solve the extraction problem by maximum-likelihood estimation, we introduce three source models that can reflect the similarities. The major contributions of this study are as follows. First, the extraction performance is improved using two methods, namely boost start for faster convergence and iterative casting for generating a more accurate reference. The effectiveness of these methods is verified through experiments using the CHiME3 dataset. Second, a concept of a fixed point pertaining to accuracy is developed. This concept facilitates understanding the relationship between the reference and the SIBF output in terms of accuracy. Third, a unified formulation of the SIBF and the mask-based beamformer is realized to apply the expertise of conventional beamformers to the SIBF. The findings of this study can also improve the performance of the SIBF and promote research on ICA and conventional beamformers.
The process of extracting the target source from mixtures of multiple sound sources, such as denoising and speech extraction, plays a significant role in improving the speech intelligibility of humans and the automatic speech recognition (ASR) performance of computers [1]. The associated methods are generally classified into nonlinear and linear methods. In the last decade, nonlinear methods have improved significantly owing to the development of deep neural networks (DNNs). These methods, referred to as DNN-based speech enhancement (SE), can generate cleaner speech from noisy speech [2], [3] and extract an utterance from the overlapping speech of multiple speakers [4], [5]. Meanwhile, linear methods such as the beamformer (BF) are advantageous in avoiding nonlinear distortions such as musical noise and spectral distortion [6], [7].
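To make the linear/nonlinear distinction concrete, the following is a minimal sketch of a mask-based MVDR beamformer, the kind of conventional linear BF referred to above. It is an illustrative example, not the SIBF itself: a (possibly DNN-estimated) time-frequency mask is used only to estimate spatial covariance matrices, while the extraction itself is a linear filter per frequency bin, which is why no nonlinear distortion is introduced. The function name and interface are hypothetical.

```python
import numpy as np

def mvdr_weights(noisy_stft, speech_mask):
    """MVDR beamformer weights for one frequency bin (illustrative sketch).

    noisy_stft : (channels, frames) complex STFT of the multichannel mixture
    speech_mask: (frames,) estimated speech-presence mask in [0, 1]
    Returns w  : (channels,) complex weights; output is w.conj() @ noisy_stft
    """
    X = noisy_stft
    # Mask-weighted spatial covariance of speech; (1 - mask)-weighted for noise
    Rs = (X * speech_mask) @ X.conj().T / max(speech_mask.sum(), 1e-8)
    Rn = (X * (1.0 - speech_mask)) @ X.conj().T / max((1.0 - speech_mask).sum(), 1e-8)
    # Steering vector: principal eigenvector of the speech covariance
    _, vecs = np.linalg.eigh(Rs)
    d = vecs[:, -1]
    # MVDR: minimize noise power subject to the distortionless constraint w^H d = 1
    Rn_inv_d = np.linalg.solve(Rn, d)
    w = Rn_inv_d / (d.conj() @ Rn_inv_d)
    return w
```

Because the output at each bin is a fixed linear combination of the microphone signals, artifacts such as musical noise cannot arise, in contrast with nonlinear masking applied directly to the spectrogram.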