Following the success of the biennial SLT workshop over the past decade, the IEEE Speech and Language Technical Committee invites proposals to host the 2020 IEEE Workshop on Spoken Language Technology (SLT 2020).
The 32nd Conference on Neural Information Processing Systems took place at the Montreal Convention Center in Montreal, Canada, from December 3 to 8, 2018. This year, the conference's acronym changed from NIPS to NeurIPS.
The IEEE Signal Processing Society (SPS) announces the 2023 Class of Distinguished Lecturers and Distinguished Industry Speakers for the term of 1 January 2023 to 31 December 2024. The IEEE SPS Distinguished Lecturer (DL) Program provides a means for Chapters to have access to well-known educators and authors in the field of signal processing to lecture at Chapter meetings.
Each year, the IEEE Board of Directors confers the grade of Fellow on up to one-tenth of one percent of the voting members. To qualify for consideration, an individual must have been a Member, normally for five years or more, and a Senior Member at the time of nomination to Fellow. The grade of Fellow recognizes unusual distinction in IEEE's designated fields.
Novel computational signal and image analysis approaches based on feature-rich mathematical and computational frameworks continue to push the technological envelope, providing optimized and efficient solutions.
The papers in this special section focus on self-supervised learning for speech and audio processing. A current trend in the machine learning community is the adoption of self-supervised approaches to pretrain deep networks. Self-supervised learning uses proxy supervised learning tasks (or pretext tasks), such as distinguishing parts of the input signal from distractors, or reconstructing masked input segments conditioned on unmasked segments, to derive training targets from unlabeled corpora.
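The masked-reconstruction pretext task described above can be sketched as follows: supervision is derived from the unlabeled signal itself by hiding random spans and keeping the original values as reconstruction targets. This is an illustrative sketch only; the function name and the span-length and span-count parameters are arbitrary choices, not taken from any particular method in this section.

```python
import numpy as np

def make_masked_pretext_example(signal, span_len=20, n_spans=3, rng=None):
    """Build one self-supervised training example from an unlabeled signal:
    zero out random spans and keep the original values as targets.
    (Hypothetical helper for illustration; not from a specific paper.)"""
    rng = rng or np.random.default_rng(0)
    masked = signal.copy()
    mask = np.zeros(len(signal), dtype=bool)
    for _ in range(n_spans):
        start = rng.integers(0, len(signal) - span_len)
        mask[start:start + span_len] = True   # spans may overlap
    masked[mask] = 0.0                        # hide the masked segments
    targets = signal[mask]                    # values the model must predict
    return masked, mask, targets

# Unlabeled "audio": the supervision comes entirely from the signal itself.
x = np.sin(np.linspace(0, 8 * np.pi, 400)).astype(np.float32)
masked_x, mask, y = make_masked_pretext_example(x)
```

A model pretrained to predict `y` from `masked_x` (conditioned on the unmasked context) learns representations without any labels; the same idea underlies discriminative variants, where the model instead distinguishes the true masked segment from distractor segments.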
Although supervised deep learning has revolutionized speech and audio processing, it has required building specialist models for individual tasks and application scenarios, and it is difficult to apply to dialects and languages for which only limited labeled data are available. Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.