Post-doc position in speech processing research at SUTD, Singapore
Project: Teaching and Learning English Pronunciation by Generating the Vocal Tract Shapes from the Frequency Domain Information
(This is a new 24-month project funded by the highly competitive MOE Academies Fund (MAF). PI: Dr. Simon Lui, SUTD)
Description: The purpose of this project is to develop an interactive app for both mobile devices and the web that helps learners improve their pronunciation by providing a real-time visual representation of their vocal tract shape. With such feedback, it is envisaged that learners will learn to modify their tongue position and height in the oral cavity and their lip shape to “match” the vocal tract shape of the normed vocal pattern for that particular sound. For example, learners articulating the front unrounded vowel /e/ will see their own vocal tract shape against an outline or indicators that present the desired or targeted vocal tract shape. For the purposes of the project, the normed or targeted patterns will be those of standard British English, the generally accepted norm in Singapore schools.
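The “frequency domain information” in the project title is typically captured by short-time spectral features such as formant frequencies, which distinguish one vowel from another. As a minimal sketch only (not part of the project's actual codebase), the Python snippet below estimates formants from a single voice frame using autocorrelation LPC and root-finding; the frame length, LPC order, and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter


def lpc(frame, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns the predictor polynomial a (a[0] = 1) and the reflection
    (PARCOR) coefficients k."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        a_prev = a.copy()
        a[i] = ki
        for j in range(1, i):
            a[j] = a_prev[j] + ki * a_prev[i - j]
        err *= 1.0 - ki * ki
    return a, k


def formants(a, sr, min_hz=90.0, max_bw=400.0):
    """Rough formant estimates (Hz) from the angles of the LPC poles,
    keeping only reasonably sharp resonances."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]              # one of each conjugate pair
    freqs = np.angle(roots) * sr / (2.0 * np.pi)   # pole angle -> frequency
    bws = -np.log(np.abs(roots)) * sr / np.pi      # pole radius -> bandwidth
    return sorted(f for f, b in zip(freqs, bws) if f > min_hz and b < max_bw)


if __name__ == "__main__":
    # Self-test on a synthetic vowel-like frame: a 100 Hz impulse train passed
    # through two resonators (nominal F1 = 500 Hz, F2 = 1800 Hz; illustrative
    # values only, not measurements of any real vowel).
    sr = 16000
    frame = np.zeros(int(0.025 * sr))
    frame[::sr // 100] = 1.0
    for f0, bw in [(500.0, 80.0), (1800.0, 100.0)]:
        radius = np.exp(-np.pi * bw / sr)
        theta = 2.0 * np.pi * f0 / sr
        frame = lfilter([1.0], [1.0, -2.0 * radius * np.cos(theta), radius ** 2], frame)
    frame *= np.hamming(len(frame))
    a, _ = lpc(frame, order=6)
    print("Estimated formants (Hz):", formants(a, sr)[:2])
```

In a real-time app the same analysis would run on successive windowed frames of the learner's microphone input; the self-test above only checks that the estimator recovers the two synthetic resonances.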
Skill requirements:
1. Audio information retrieval (e.g. MFCC, LPC, FFT, formant analysis)
2. Machine learning (e.g. (deep) neural networks, Bayesian networks, SVM, HMM, PCA)
3. Experience in speech research (e.g. vocal tract area function, pronunciation visualization) or audio research
4. (Optional) Experience in mobile or web app programming
Duties:
1. Human voice to vocal tract shape conversion research (one classical approach is sketched below).
2. Data mining and machine learning model construction of speech and vocal tract shape features.
3. Assist in developing an educational app (with an audio app developer).
4. Conduct subject tests (with a local English teacher).
More information at: http://www.simonlui.com/job/
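Duty 1, converting a voice signal into a vocal tract shape, is classically approached by mapping LPC reflection (PARCOR) coefficients onto the relative cross-sectional areas of a lossless concatenated-tube model of the vocal tract (Wakita-style analysis). The sketch below shows one common form of that mapping, plus a crude mismatch score that could drive the "match the target shape" feedback described above; the sign convention, the fixed end area, and the distance measure are all assumptions, not the project's specified method.

```python
import numpy as np


def reflection_to_area(k, end_area=1.0):
    """Map reflection (PARCOR) coefficients onto relative cross-sectional
    areas of a concatenated lossless-tube vocal tract model.

    Uses the convention k_m = (A_{m+1} - A_m) / (A_{m+1} + A_m), i.e.
    A_{m+1} = A_m * (1 + k_m) / (1 - k_m); sign and ordering conventions
    differ between references. A stable predictor guarantees |k_m| < 1,
    and only area *ratios* are meaningful (the absolute scale is arbitrary)."""
    areas = [float(end_area)]
    for km in k:
        areas.append(areas[-1] * (1.0 + km) / (1.0 - km))
    return np.asarray(areas)


def shape_mismatch(learner_areas, target_areas):
    """Crude mismatch score between two area profiles (lower = closer).
    Profiles are mean-normalised in the log domain so that overall vocal
    tract size does not dominate the comparison."""
    n = min(len(learner_areas), len(target_areas))
    lrn = np.log(learner_areas[:n] / np.mean(learner_areas[:n]))
    tgt = np.log(target_areas[:n] / np.mean(target_areas[:n]))
    return float(np.sqrt(np.mean((lrn - tgt) ** 2)))


# Usage sketch (hypothetical): the reflection coefficients could come from a
# Levinson-Durbin recursion such as the one in the previous snippet, and the
# target profile from a reference (standard British English) speaker.
#   _, k = lpc(frame, order=12)
#   learner_areas = reflection_to_area(k)
#   score = shape_mismatch(learner_areas, target_areas)  # drives the visual feedback
```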
About the Singapore University of Technology and Design (SUTD) and the SUTD Audio Research Group (ARG): Established in collaboration with MIT, SUTD is home to the ARG, a new group of young and energetic faculty members and researchers with a strong passion for novel, pioneering audio research. We collaborate with the Massachusetts Institute of Technology (MIT), Nanyang Technological University, Singapore (NTU), the National University of Singapore (NUS), the Hong Kong University of Science and Technology (HKUST), and Purdue University Calumet. We also work with various industry partners, including Apple Inc., SingTel, HTC Singapore and Sennheiser, and maintain close connections with music production professionals.
The SUTD ARG is currently working on these projects:
- Teaching and Learning English Pronunciation by Generating the Vocal Tract Shapes from the Frequency Domain Information.
- SYNCBEAT: A personalised music-aid application to enhance auditory-motor synchronization in running.
- Horn detection under noisy road traffic.
- A Real Time Common Chord Progression Guide on the Smartphone.
- A Musical, Mobile System for Affective Neurofeedback.
Application: Please send a letter of interest and a CV to Dr. Simon LUI - simon_lui@sutd.edu.sg
Recruitment will continue until all positions are filled.