News and Resources for Members of the IEEE Signal Processing Society
The Spring 2013 edition of the IEEE Speech and Language Processing Technical Committee’s Newsletter is now online. It includes announcements from the current and past TC chairs, as well as a number of articles collated by the editorial board. Subscribe to the newsletter to be automatically notified of new editions. We believe the newsletter is an ideal forum for updates, reports, announcements, and editorials, and we encourage interested individuals to send us their contributions.
Dilek Hakkani-Tür, Editor-in-chief
William Campbell, Editor
Haizhou Li, Editor
Patrick Nguyen, Editor
Advances over the last decade in speech recognition and NLP have fueled the widespread use of spoken dialog systems, including telephony-based applications, multimodal voice search, and voice-enabled smartphone services designed to serve as mobile personal assistants. Key limitations of the systems fielded to date frame opportunities for new research on physically situated and open-world spoken dialog and interaction. These opportunities are especially salient for goals such as supporting efficient communication at a distance with Xbox applications and avatars, collaborating with robots in public spaces, and enlisting assistance from in-car information systems while driving a vehicle.
The 26th annual Conference on Neural Information Processing Systems (NIPS) took place in Lake Tahoe, Nevada, in December 2012. The conference covers a wide range of research topics, from synthetic neural systems built with machine learning and artificial intelligence algorithms to the analysis of natural neural processing systems. This article summarizes selected talks on recent developments in neural networks and deep learning algorithms presented at NIPS 2012.
Researchers at Carnegie Mellon University’s Silicon Valley Campus and Honda Research Institute have brought together many of today’s visual and audio technologies to build a cutting-edge in-car interface. Ian Lane, Research Assistant Professor at CMU Silicon Valley, and Antoine Raux, Senior Scientist at Honda Research Institute, spoke to us regarding the latest news surrounding AIDAS: An Intelligent Driver Assistive System.
In this article we describe the "Spoken Web Search" task within MediaEval, which aims to foster research on language-independent search of "real-world" speech data, with a special emphasis on low-resource languages. In addition, we review the main approaches proposed in 2012 and issue a call for participation in the 2013 evaluation.
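The approaches themselves are reviewed in the article; purely as an illustration of what recognizer-free, language-independent search can look like, the sketch below matches a spoken query against test utterances using subsequence dynamic time warping over MFCC features, a common baseline for query-by-example search. The file names, feature settings, and scoring are assumptions made for illustration, not the evaluation's reference system.

```python
# Illustrative query-by-example search: rank utterances by how cheaply the
# spoken query aligns to any span of the utterance (subsequence DTW).
import numpy as np
import librosa

def mfcc_feats(path, sr=8000, n_mfcc=13):
    """Load audio and return frame-level MFCCs, shape (frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def subsequence_dtw_cost(query, utterance):
    """Minimum DTW cost of aligning the query to any span of the utterance."""
    Q, U = len(query), len(utterance)
    # Pairwise Euclidean distances between query frames and utterance frames.
    dist = np.linalg.norm(query[:, None, :] - utterance[None, :, :], axis=-1)
    D = np.full((Q + 1, U + 1), np.inf)
    D[0, :] = 0.0  # the match may start at any utterance frame
    for i in range(1, Q + 1):
        for j in range(1, U + 1):
            D[i, j] = dist[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Q, 1:].min() / Q  # normalize by query length

# Hypothetical file names: a spoken query term and two test utterances.
query = mfcc_feats("query_term.wav")
for utt in ["utt_001.wav", "utt_002.wav"]:
    print(utt, subsequence_dtw_cost(query, mfcc_feats(utt)))
```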
My voice tells who I am. No two individuals sound identical because their vocal tract shapes and other parts of their voice production organs differ. With speaker verification technology, we extract speaker traits, or a voiceprint, from speech samples to establish a speaker's identity. Among the different forms of biometrics, voice is believed to be the most straightforward for telephone-based applications because the telephone is built for voice communication. The recent release of the Baidu-Lenovo A586 marks an important milestone in the mass-market adoption of speaker verification technology in mobile applications. The voice-unlock feature in the smartphone allows users to unlock their phone screens with spoken passphrases.
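As a rough illustration of the voiceprint idea (and not the system shipped on the A586, whose details are not described here), the sketch below averages MFCC features into a fixed-length vector per utterance and accepts a claimed identity when the enrollment and test voiceprints are similar enough. The file names, feature choice, and threshold are assumptions for illustration.

```python
# Minimal voiceprint-style speaker verification sketch: mean-MFCC embeddings
# compared by cosine similarity against a fixed acceptance threshold.
import numpy as np
import librosa

def voiceprint(wav_path, n_mfcc=20):
    """Return a fixed-length vector summarizing one utterance."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)  # average over time -> crude speaker embedding

def verify(enroll_path, test_path, threshold=0.85):
    """Accept the claimed identity if the two voiceprints are similar enough."""
    a, b = voiceprint(enroll_path), voiceprint(test_path)
    score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score, score >= threshold

if __name__ == "__main__":
    # Hypothetical file names for an enrollment passphrase and an unlock attempt.
    score, accepted = verify("enroll_passphrase.wav", "unlock_attempt.wav")
    print(f"similarity={score:.3f} accepted={accepted}")
```

In practice, a deployed system would use a trained speaker model (e.g., a statistical or neural embedding) rather than raw averaged features, but the enrollment-versus-test comparison above captures the basic workflow the article describes.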
Cleft Lip and Palate (CLP) is among the most frequent congenital abnormalities: facial development is abnormal during gestation, leading to insufficient closure of the lip, palate, and jaw, which in turn affects articulation. Because of the wide variety of malformations, speech production is affected differently from patient to patient. Previous research in our group focused mostly on text-wide scores such as speech intelligibility. In current projects we focus on a more detailed automatic analysis, with the goal of providing an in-depth diagnosis with direct feedback on articulation deficits.
This article gives a brief overview of the 8th International Symposium on Chinese Spoken Language Processing (ISCSLP), which was held in Hong Kong on 5-8 December 2012. ISCSLP is a major scientific conference for scientists, researchers, and practitioners to report and discuss the latest progress in all theoretical and technological aspects of Chinese spoken language processing. The working language of ISCSLP is English.