IEEE JSTSP Special Issue on Machine Learning for Cognition in Radio Communications and Radar (ML-CR2)
Manuscript Due: July 14, 2017
Publication Date: February 2018
CFP Document
Research Fellow sought to work in the Sigmedia Group in the School of Engineering at Trinity College Dublin, Ireland. The ideal candidate will have a profile in video processing that complements our work in audio-visual analysis of speech. The researcher will also be responsible for supporting research in multimodal speech analysis in a number of areas, including:
Audio-visual speech recognition
Paralinguistic analysis of speech, particularly affect and engagement
Lecture Date: April 13, 2017
Chapter: Toronto
Chapter Chair: Mehrnaz Shokrollahi
Topic: TBD
Lecture Date: October 18, 2017
Chapter: United Kingdom & Ireland
Chapter Chair: Marwan Al-Akaidi
Topic: Machine Listening: Making Computers that Understand Sound
The 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics will take place October 15–18, 2017 at the Mohonk Mountain House, New Paltz, New York, USA.
Lecture Date: March 30, 2017
Chapter: Malaysia
Chapter Chair: Syed Abu Bakar
Topic: What I wish I knew when I was an entry-level engineer
Send cover letter and resume to: careers@sensoryinc.com
Please copy and paste the following into the “SUBJECT” line: IEEE Signal Processing Society - Sr. Acoustic DSP Engineer (JC: 17-1_PC-21)
Job Code: 17-1_PC-21
Position Title: Sr. Acoustic DSP Engineer
Job Location: Santa Clara, CA USA
How would you feel if electronic devices could recognize your emotions and take action based on them? They could cheer you up with a joke when you are sad, recognize sleepiness while you are driving, and help you tell whether a person is in real pain or just claiming to be. They could differentiate a Duchenne smile from a forced one, or detect depression from facial muscle movements. These applications aren’t promises of the future: they’re possible today with recent developments in signal processing and machine learning algorithms.