An Overview of the IEEE SPS Speech and Language Technical Committee

By: Michiel Bacchiani and Eric Fosler-Lussier
As part of the IEEE Signal Processing Society (SPS), the Speech and Language Technical Committee (SLTC) promotes research and development activities for technologies that are used to process speech and natural language.
 
Much of the SLTC’s efforts are devoted to the annual IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), where the SLTC manages the review of papers covering speech and language and organizes conference sessions, special sessions, and tutorials. In addition, it promotes and supports various workshops, most prominently the Automatic Speech Recognition and Understanding (ASRU) and the Spoken Language Technology (SLT) workshops.
 
The ASRU and SLT workshops are each held biennially, in alternating years. This year, the SLT workshop will be held in Athens, Greece [1]. The SLTC recently conducted a search for the venue for ASRU 2019. A four-member workshop subcommittee disseminated calls for proposals and also drew on personal connections to identify strong groups in diverse geographic locations that could organize the workshop. The subcommittee supported the prospective proposers in constructing viable, well-planned proposals. As a result, the SLTC gathered very strong collaborative proposals from three lead institutions: the Qatar Computing Research Institute (for Doha, Qatar), the National University of Singapore (for Sentosa Island, Singapore), and Friedrich-Alexander-Universität (for Cartagena, Colombia). The SLTC selected Singapore for ASRU 2019 in a close vote (all proposals were within four percentage points of one another), a testament both to the strength of the proposals and to the process used to recruit and vet them. We look forward to a successful ASRU 2019 in Singapore!
 
Our speech and language community is strong and growing. Our 2017 annual election had 54 candidates for 19 positions, with 12 first-time members elected, and our vice-chair election had four candidates. ICASSP 2018 submissions in our speech and language area were up as well, with a 40% increase over 2017 (to 634 submissions), representing a 22% share of papers presented at the conference. Our work also received significant recognition in the form of IEEE SPS awards, including the Society Award earned by Alex Acero and the Meritorious Service Award earned by Mari Ostendorf.
 
The most recent decade has witnessed language technologies gaining wide acceptance by the general public. Speech recognition interfaces to smartphones and smart speakers that provide question-and-answer or dialog technologies are becoming unremarkable in the eyes of consumers. As a result, the amount of data available from those interactions is growing very rapidly. This increase leads to a virtuous cycle: more data allows for larger and/or better-trained state-of-the-art neural network models, and the systems' improved performance, in turn, gives users an incentive to engage more with the technology. Consequently, our community has become increasingly entangled with more fundamental research in machine learning, and the recent maturing of spoken language technology presents an opportunity to widen our community's scope. Some exciting directions in this area include leveraging neural machine learning with multiple objectives to transfer systems that work well in one condition to another (e.g., robustness to unique noise conditions). We are also starting to see work that ties modalities together, such as learning techniques that map speech and language utterances to visual inputs (pictures, movies). This cross-modal integration will be an important direction for the future of our technical committee and a point of contact across technical committees.
