JSTSP Featured Articles

We address voice activity detection in acoustic environments containing transients and stationary noise, which often occur in real-life scenarios. We exploit the distinct spatial patterns of speech and non-speech audio frames by independently learning the underlying geometric structure of each class, using a deep encoder-decoder neural network architecture.
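For illustration only, the sketch below shows one simple way to model each class with its own encoder-decoder and score frames by reconstruction error; it is not the authors' exact architecture, and the feature dimension and layer sizes are assumptions.

# Minimal PyTorch sketch: separate encoder-decoder models for speech and
# non-speech frames, with frames scored by which model reconstructs them
# better. Illustrative only; not the paper's actual network.
import torch
import torch.nn as nn


def make_autoencoder(dim_in: int = 40, dim_code: int = 8) -> nn.Module:
    """Small fully connected encoder-decoder over per-frame features."""
    return nn.Sequential(
        nn.Linear(dim_in, 32), nn.ReLU(),
        nn.Linear(32, dim_code), nn.ReLU(),   # low-dimensional code
        nn.Linear(dim_code, 32), nn.ReLU(),
        nn.Linear(32, dim_in),                # reconstruction
    )


def train(model: nn.Module, frames: torch.Tensor, epochs: int = 20) -> None:
    """Fit the autoencoder to frames of one class (speech OR non-speech)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(frames), frames)
        loss.backward()
        opt.step()


def vad_scores(speech_ae: nn.Module, noise_ae: nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """Frames reconstructed better by the speech model get higher scores."""
    with torch.no_grad():
        err_s = ((speech_ae(frames) - frames) ** 2).mean(dim=1)
        err_n = ((noise_ae(frames) - frames) ** 2).mean(dim=1)
    return err_n - err_s  # positive values suggest speech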

Given the recent surge of developments in deep learning, this paper provides a review of state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and the potential for cross-fertilization between areas.
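As a small, hedged example of a front end shared across these domains, the lines below compute a log-mel spectrogram with librosa; the file name and parameter values are illustrative assumptions, not taken from the paper.

# Log-mel spectrogram front end common to speech, music, and environmental
# sound pipelines. "example.wav" and the STFT/mel settings are placeholders.
import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=16000)           # mono waveform at 16 kHz
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=512, hop_length=160, n_mels=64)   # (64, n_frames) power spectrogram
log_mel = librosa.power_to_db(mel, ref=np.max)           # dB scale, as typically fed to CNN/RNN models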

The IEEE Signal Processing Society congratulates the following recipients who will receive the 2018 IEEE Signal Processing Society Paper Award for their papers published in the IEEE Journal of Selected Topics in Signal Processing. Presentation of the paper awards will take place at ICASSP 2019 in Brighton, UK.

Matched field processing (MFP) compares the measured and modeled pressure fields received at an array of sensors to localize a source in an ocean waveguide. Typically, there are only a few sources compared to the number of candidate source locations, or range-depth cells. We use sparse Bayesian learning (SBL) to learn a common sparsity profile corresponding to the locations of the sources present. SBL performance is compared to traditional processing in simulations and on experimental ocean acoustic data.
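To make the comparison concrete, here is a minimal NumPy sketch of the conventional (Bartlett) matched-field processor that serves as the traditional baseline; it is not the SBL method, and the replica matrix below is a random placeholder rather than an ocean-waveguide propagation model.

# Bartlett MFP: correlate the measured array data vector with modeled replica
# vectors over a grid of candidate range-depth cells and take the peak.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_cells = 16, 200

# Placeholder replica matrix: one unit-norm modeled pressure vector per cell.
replicas = rng.standard_normal((n_sensors, n_cells)) + 1j * rng.standard_normal((n_sensors, n_cells))
replicas /= np.linalg.norm(replicas, axis=0, keepdims=True)

# Simulated measurement: a source in cell 42 plus noise.
true_cell = 42
data = replicas[:, true_cell] + 0.1 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))

# Bartlett ambiguity surface: B(m) = |w_m^H d|^2 / ||d||^2 for each cell m.
bartlett = np.abs(replicas.conj().T @ data) ** 2 / np.vdot(data, data).real
print("estimated cell:", int(np.argmax(bartlett)), "true cell:", true_cell)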

Supervised learning-based methods for source localization, being data-driven, can be adapted to different acoustic conditions via training and have been shown to be robust to adverse acoustic environments. In this paper, a convolutional neural network (CNN) based supervised learning method for estimating the direction of arrival (DOA) of multiple speakers is proposed. Multi-speaker DOA estimation is formulated as a multi-class, multi-label classification problem, where the assignment of each DOA label to the input feature is treated as a separate binary classification problem.
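The PyTorch sketch below illustrates this multi-label formulation (one sigmoid output per candidate DOA, trained with binary cross-entropy); the input feature layout, class count, and network size are assumptions for illustration, not the paper's configuration.

# Multi-speaker DOA estimation posed as multi-label classification.
import torch
import torch.nn as nn

N_DOA_CLASSES = 37            # e.g. 0..180 degrees in 5-degree steps (assumed)
N_CHANNELS, N_FREQ, N_TIME = 4, 256, 32

class DoaCNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_CHANNELS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # pool over time and frequency
        )
        self.classifier = nn.Linear(32, N_DOA_CLASSES)  # one logit per DOA class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = DoaCNN()
x = torch.randn(8, N_CHANNELS, N_FREQ, N_TIME)       # batch of input features
target = torch.zeros(8, N_DOA_CLASSES)               # multi-hot: 1 per active DOA
target[:, [6, 20]] = 1.0                              # e.g. two simultaneous speakers
loss = nn.BCEWithLogitsLoss()(model(x), target)       # independent binary problems
active = torch.sigmoid(model(x)) > 0.5                # predicted DOA classes at test time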

This paper investigates sound-field modeling in a realistic reverberant setting. Starting from a few point-like microphone measurements, the goal is to estimate the direct source field within a whole three-dimensional (3-D) space around these microphones. Previous sparse sound-field decompositions assumed only spatial sparsity of the source distribution and generally could not handle reverberation.
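For context, the sketch below illustrates the earlier spatial-sparsity-only approach that this work extends: model the microphone pressures as a sparse combination of free-field Green's functions on a grid of candidate source positions and recover the active positions greedily. Reverberation is not handled here, and the geometry, frequency, and grid are illustrative assumptions.

# Sparse sound-field decomposition under a spatial-sparsity-only model,
# solved with a simple orthogonal matching pursuit in NumPy.
import numpy as np

rng = np.random.default_rng(1)
k = 2 * np.pi * 500 / 343.0                        # wavenumber at 500 Hz

mics = rng.uniform(-0.5, 0.5, size=(8, 3))         # a few point-like microphones
grid = rng.uniform(-2.0, 2.0, size=(400, 3))       # candidate source positions

# Free-field Green's function dictionary: G[m, j] = exp(-jkr) / (4 pi r).
r = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

# Synthetic measurement: two active sources on the grid plus noise.
truth = [37, 250]
p = G[:, truth] @ np.array([1.0, 0.7]) + 0.01 * rng.standard_normal(8)

# Orthogonal matching pursuit: pick the atom best correlated with the
# residual, refit the selected amplitudes by least squares, repeat.
support, residual = [], p.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(G.conj().T @ residual))))
    amps, *_ = np.linalg.lstsq(G[:, support], p, rcond=None)
    residual = p - G[:, support] @ amps
print("estimated source cells:", sorted(support), "true:", sorted(truth))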

Acoustic source localization and tracking is a well-studied topic in signal processing, but most traditional methods incorporate simplifying assumptions such as a point source, free-field propagation of the sound wave, static acoustic sources, time-invariant sensor constellations, and simple noise fields.
