The following is a list of the Signal Processing Society's distinguished industry speakers.
Jakob Hoydis (SM) is a Principal Research Scientist at NVIDIA, working at the intersection of machine learning and wireless communications. Before joining NVIDIA, he was a Member of Technical Staff and later Head of a research department at Nokia Bell Labs (2012-2021), with a short break during which he co-founded the social network SPRAED (2014-2015). He obtained the diploma degree in electrical engineering (2002-2008) from RWTH Aachen University, Germany, and the Ph.D. degree (2009-2012) from Supélec, France.
Dr. Hoydis was Chair, IEEE Communications Society Emerging Technology Initiative on Machine Learning, as well as Editor, IEEE Transactions on Wireless Communications (2019-2021). From 2019 to 2022, he was Area Editor, IEEE Journal on Selected Areas in Communications Series on Machine Learning in Communications and Networks.
He is the recipient of the VDE ITG Johann-Philipp-Reis Prize (2019), the IEEE-SEE Glavieux Prize (2019), the IEEE Marconi Prize Paper Award (2018), the IEEE Leonard G. Abraham Prize (2015), the IEEE Wireless Communications and Networking Conference 2014 Best Paper Award, the VDE ITG Förderpreis Award (2013), the Publication Prize of the Supélec Foundation (2012), the Nokia AI Innovation Award (2018), and the Nokia France Top Inventor Awards (2018 and 2019). He is one of the maintainers and core developers of Sionna, a GPU-accelerated open-source link-level simulator for next-generation communication systems.
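For readers new to Sionna, the following is a minimal, illustrative sketch of an uncoded 16-QAM transmission over an AWGN channel, written against Sionna's pre-1.0 Python API (BinarySource, Mapper, Demapper, AWGN, ebnodb2no); the module paths and call signatures are assumptions based on that release line and may differ in newer versions.

    # Minimal uncoded 16-QAM over AWGN link simulation (illustrative sketch only).
    # Assumes Sionna's pre-1.0 package layout; names may differ in newer releases.
    import tensorflow as tf
    from sionna.utils import BinarySource, ebnodb2no
    from sionna.mapping import Mapper, Demapper
    from sionna.channel import AWGN

    batch_size, bits_per_symbol, num_symbols = 64, 4, 1024   # 4 bits/symbol = 16-QAM

    binary_source = BinarySource()
    mapper = Mapper("qam", bits_per_symbol)
    demapper = Demapper("app", "qam", bits_per_symbol)
    channel = AWGN()

    bits = binary_source([batch_size, num_symbols * bits_per_symbol])
    x = mapper(bits)                                      # map bits to QAM symbols
    no = ebnodb2no(10.0, bits_per_symbol, coderate=1.0)   # noise power from Eb/N0 = 10 dB
    y = channel([x, no])                                  # add complex Gaussian noise
    llr = demapper([y, no])                               # per-bit log-likelihood ratios
    bits_hat = tf.cast(llr > 0.0, bits.dtype)             # hard decisions
    ber = tf.reduce_mean(tf.cast(tf.not_equal(bits, bits_hat), tf.float32))
    print(float(ber))

Because these building blocks are TensorFlow layers, the same pipeline runs batched on a GPU and can be differentiated end to end, which is what makes such simulators attractive for machine-learning research in communications.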
Dr. Hoydis’ research interests include machine learning, signal processing, and information theory and their applications to wireless communications.
Jakob Hoydis
NVIDIA, France
E: jhoydis@nvidia.com
Linda J. Moore (SM) received a B.S. in computer engineering (2000-2004) from Wright State University (Dayton, Ohio, USA) and an M.S. in electrical engineering (2004-2006) from The Ohio State University (Columbus, Ohio, USA). She received a Ph.D. in electrical engineering (2006-2016) from the University of Dayton (Dayton, Ohio, USA) where she focused on the impact of phase information on radar automatic target recognition.
Dr. Moore is an IEEE Senior Member (2020) and served as a Technical Session Chair at the IEEE Radar Conference, Radar Imaging Systems Session (2014), and at the SPIE Defense and Commercial Sensing Conference, Algorithms for SAR Imagery Session (2014, 2017).
Dr. Moore has 19 technical publications, including journal articles in IEEE Transactions on Aerospace and Electronic Systems (2018) and IEEE Aerospace and Electronic Systems Magazine (2014). She also contributed content to Part VII: Imaging Radar of Stimson’s Introduction to Airborne Radar (2014), with acknowledgement to the AFRL Gotcha Radar Program.
Dr. Moore has focused on innovative solutions for real-time radar processing to create 24/7, all-weather, day/night sensing capabilities. She has strengthened the workforce through internships, technical and strategic guidance, development of “soft skills” (e.g., communication), promotion of professionalism, and emphasis on participation in world-class technical societies like IEEE. Her exemplary science, technology, engineering, and mathematics (STEM) leadership and mentoring were recognized in 2020 when she received the IEEE Dayton Section Women in Engineering (WIE) Award. Dr. Moore has significantly contributed to the engineering community by publishing data sets and challenge problems to reduce the barrier to entry for radar signal processing researchers.
Linda J. Moore
E: linda.moore.10@us.af.mil
Ruhi Sarikaya (F) received his B.S. degree from Bilkent University, Turkey (1990-1995); M.S. degree from Clemson University, USA (1995-1997); and Ph.D. degree from Duke University, USA (1997-2001), all in electrical and computer engineering. He has been a Director at Amazon Alexa since 2016, where he built and leads the Intelligent Decisions organization within Alexa AI. With his team, he has been building core AI capabilities around ranking, relevance, natural language understanding, dialog management, contextual understanding, personalization, self-learning, proactive suggestions, metrics, and analytics for Alexa. Prior to that, he was a principal science manager and the founder of the language understanding and dialog systems group at Microsoft between 2011 and 2016. His group built the language understanding and dialog management capabilities of Cortana, Xbox One, and the underlying platform. Before Microsoft, he was a research staff member and team lead in the Human Language Technologies Group at the IBM T.J. Watson Research Center for ten years. Prior to IBM, he worked as a researcher at the Center for Spoken Language Research (CSLR) at the University of Colorado at Boulder for two years.
Dr. Sarikaya is an IEEE Fellow (2021) and the recipient of the Best Paper Award for “Convolutional Neural Network Based Triangular CRF for Joint Intent Detection and Slot Filling” at the IEEE Automatic Speech Recognition and Understanding Workshop (2013). He was Lead Guest Editor, Special Issue on “Processing Morphologically Rich Languages”, IEEE Transactions on Audio, Speech, and Language Processing (2009); Associate Editor, IEEE Transactions on Audio, Speech, and Language Processing (2007-2011); Associate Editor, IEEE Signal Processing Letters (2010-2012); and Member, IEEE Speech and Language Processing Technical Committee (NLP Area) (2015-2017).
Dr. Sarikaya has also served as General Co-Chair, IEEE Spoken Language Technology Workshop (SLT) (2012), and Publicity Chair, IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) (2005). He has given keynotes at major AI, Web, and language technology conferences. He has published over 130 technical papers in refereed journals and conference proceedings and is the inventor of over 80 issued or pending patents.
Ruhi Sarikaya
Amazon
Bellevue, WA, USA
E: rsarikay@amazon.com
Ivan Tashev (F) received his Diploma Engineer degree in Electronic Engineering in 1984 and his Ph.D. in Computer Science in 1990 from the Technical University of Sofia, Bulgaria. He was an Assistant Professor in the Department of Electronic Engineering of the same university when, in 1998, he joined Microsoft in Redmond, USA. Currently, Dr. Tashev is a Partner Software Architect and leads the Audio and Acoustics Research Group at Microsoft Research Labs in Redmond, USA. Since 2012, he has been an Affiliate Professor in the Department of Electrical and Computer Engineering of the University of Washington in Seattle, USA, and since 2019 he has been an Honorary Professor at the Technical University of Sofia, Bulgaria.
Dr. Tashev is an IEEE Fellow (2021); Member, Audio Engineering Society (2006); Member, Acoustical Society of America (2010); Member, SPS Audio and Acoustics Signal Processing Technical Committee (2011-2014), IEEE SPS Standing Committee on Industry DSP Technology (2013-2020), and IEEE SPS Applied Signal Processing Systems Technical Committee (2021); and Chair, IEEE SPS Industry Technical Working Group (2020-2022).
Dr. Tashev is listed as an inventor on 55 U.S. patent applications, 50 of which have already been granted. The audio processing technologies created by Dr. Tashev have been incorporated into Microsoft Windows, the Microsoft Auto Platform, and the Microsoft RoundTable device. He served as the audio architect for Kinect for Xbox and for Microsoft HoloLens. His latest passion is brain-computer interfaces.
Dr. Tashev’s research interests include processing multichannel signals by means of artificial intelligence and machine learning, especially audio and biological signals.
Ivan Tashev
Microsoft
Redmond, WA, USA
E: ivantash@microsoft.com
Yan Ye (SM) received her Ph.D. degree from the University of California, San Diego, in 2002, and her B.S. and M.S. degrees from the University of Science and Technology of China in 1994 and 1997, respectively. She is currently the Head of the Video Technology Lab at Alibaba’s DAMO Academy, Alibaba Group U.S., in Sunnyvale, California. Prior to Alibaba, she held various management and technical positions at InterDigital, Dolby Laboratories, and Qualcomm.
Dr. Ye was Guest Editor, IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) special section on “the joint Call for Proposals on video compression with capability beyond HEVC” (2020) and the TCSVT special section on “Versatile Video Coding” (2021). She has been Program Committee Member, IEEE Data Compression Conference (DCC) (since 2014); Conference Subcommittee Co-Chair, IEEE Visual Signal Processing and Communication Technical Committee (VSPC-TC) (since 2022); Area Chair for “multimedia standards and related research”, IEEE International Conference on Multimedia and Expo (ICME) (2021); Publicity Chair, IEEE International Conference on Visual Communications and Image Processing (VCIP) (2021); Industry Chair, IEEE Picture Coding Symposium (PCS) (2019); Organizing Committee Member, IEEE International Conference on Multimedia and Expo (ICME) (2018); and Technical Program Committee Member, IEEE Picture Coding Symposium (PCS) (2013 and 2019).
Dr. Ye has been actively involved in developing international video coding and video streaming standards in the ITU-T SG16/Q.6 Video Coding Experts Group (VCEG) and the ISO/IEC JTC 1/SC 29 Moving Picture Experts Group (MPEG). She holds various leadership positions in international and U.S. national standards development organizations: she is currently an Associate Rapporteur of ITU-T SG16/Q.6 (since 2022), the Group Chair of the INCITS/MPEG task group (since 2020), and a focus group chair of ISO/IEC SC 29/AG 5 on MPEG Visual Quality Assessment (since 2020). She has made many technical contributions to well-known video coding and streaming standards such as H.264/AVC, H.265/HEVC, H.266/VVC, MPEG DASH, and MPEG OMAF. She is an Editor of the VVC test model, the 360Lib algorithm description, and the scalable and screen content coding extensions of the HEVC standard.
Dr. Ye is devoted to multimedia standards development, hardware and software video codec implementations, as well as deep learning-based video research. Her research interests include advanced video coding, processing and streaming algorithms, real-time and immersive video communications, AR/VR/MR, and deep learning-based video coding, processing, and quality assessment algorithms.
Yan Ye
E: yye2009@gmail.com
Jerome R. Bellegarda (F) is Apple Distinguished Scientist in Intelligent System Experience at Apple Inc., Cupertino, California, where he works on multiple user interaction modalities, including speech, handwriting, touch, keyboard, and camera input. Prior to joining Apple in 1994, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, New York (1988-1994). He received his Bachelor's degree in Mechanical Engineering from the University of Nancy, France, in 1983, and his MSc and PhD degrees in Electrical Engineering from the University of Rochester, Rochester, New York, in 1984 and 1987, respectively.
Dr. Bellegarda was elected IEEE Fellow (2003) and Fellow of the International Speech Communication Association (ISCA) (2013). He has held editorial positions for the IEEE Transactions on Speech and Audio Processing (1999-2004) and Speech Communication (2004-present). He has served on the IEEE Speech and Language Processing Technical Committee (2015-2019), the IEEE Data Science Initiative Steering Committee (2017-2019), and the ISCA Advisory Council (2013-2020). He was Chair of the ISCA Fellows Selection Committee (2016-2018) and General or Technical Chair for multiple workshops and conferences, including the Workshop on Hands-free Speech Communication and Microphone Arrays (2017) and the International Conference on Speech Communication and Technology (InterSpeech) (2012).
Dr. Bellegarda received a Best Paper Award from ISCA for his work on adaptive language modeling (2006). He was also nominated by the IEEE SPS Speech and Language Processing Technical Committee for the 2001 IEEE W.R.G. Baker Prize Paper Award and the 2003 IEEE SPS Best Paper Award. Among his diverse contributions to speech and language advances over the years, he pioneered the use of tied mixtures in acoustic modeling and latent semantics in language modeling.
Dr. Bellegarda’s research interests span machine learning applications, statistical modeling algorithms, natural language processing, man-machine communication, multiple input/output modalities, and multimedia knowledge management.
Jerome Bellegarda
E: jerome@ieee.org
Mariya Doneva (M) is a Senior Scientist and a Team Lead at Philips Research, Hamburg, Germany. She received her BSc and MSc degrees in Physics from the University of Oldenburg in 2006 and 2007, respectively, and her PhD degree in Physics from the University of Lübeck in 2010. She was a Research Associate in the Electrical Engineering and Computer Sciences Department at UC Berkeley between 2015 and 2016. Since 2016, Dr. Doneva has been leading the activities on MR Fingerprinting (a novel approach for efficient multi-parametric quantitative imaging) at Philips Research, including in-house technical development and collaboration with clinical and technical partners.
Dr. Doneva was an Organizing Committee Member, International Society for Magnetic Resonance in Medicine (ISMRM) (2019-2021); IEEE International Symposium on Biomedical Imaging (ISBI) (2020); and the ISMRM Workshop on Data Sampling and Image Reconstruction (2020). She was Guest Editor, IEEE Signal Processing Magazine Special Issue on Computational MRI: Compressive Sensing and Beyond; Editor of a comprehensive reference book on Quantitative Magnetic Resonance Imaging; Editorial Board Member, Magnetic Resonance in Medicine and IEEE Transactions on Computational Imaging; and Editor of a reference book on MR image reconstruction. She is a recipient of the Junior Fellow Award of the International Society for Magnetic Resonance in Medicine (2011).
Dr. Doneva’s research interests include methods for efficient data acquisition, image reconstruction, and quantitative parameter mapping in the context of magnetic resonance imaging. Her work involves developing mathematical optimization and signal processing approaches that aim to improve MR scan efficiency and to obtain robust and reliable (multi-parametric) quantitative information for diagnostics and therapy follow-up.
Mariya Doneva
Philips GmbH Innovative Technologies
Hamburg, Germany
E: mariya.doneva@philips.com
Leo Grady (M) received the B.Sc. degree in Electrical Engineering from the University of Vermont and a Ph.D. in Cognitive and Neural Systems from Boston University. During his tenure as CEO of Paige, Dr. Grady led the company to become an industry leader, launch several groundbreaking software products internationally, and become the first-ever company to receive FDA approval for an AI product in pathology. Dr. Grady is currently CEO in Residence with Breyer Capital.
Prior to joining Paige, Dr. Grady was the SVP of Engineering at HeartFlow, where he led full-stack technology and product development efforts for HeartFlow’s cardiovascular diagnostic and treatment planning software while also driving HeartFlow’s IP portfolio. Before HeartFlow, he served in various technology and leadership roles at Siemens Healthcare. He is internationally recognized as a technology leader in AI for healthcare. He is the recipient of the Edison Patent Award (2012) for best patent in medical imaging and was inducted as a Fellow of the American Institute for Medical and Biological Engineering (2014).
Dr. Grady was an Editorial Board Member of the Society for Industrial and Applied Mathematics (SIAM) Journal on Imaging Sciences and the Journal of Mathematical Imaging and Vision, and Area Chair for the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference (2012–2016) and the Conference on Computer Vision and Pattern Recognition (CVPR) (2013–2014). He has served on grant boards for NIH small business grants and NSF computer vision grants. He is a Member of the IEEE, the MICCAI Society, and Tau Beta Pi (engineering honors fraternity). He was a Planning Committee Member for MICCAI (2017) and a Program Committee Member for the European Conference on Computer Vision (ECCV); Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR); the International Conference on Distributed Smart Cameras; Medical Computer Vision (MCV) on Big Data, Deep Learning and Novel Representations; Interactive Computer Vision; Perceptual Organization for Computer Vision; Structured Models in Computer Vision; and Information Theory in Computer Vision and Pattern Recognition.
Leo Grady
Darien, CT, USA
E: leograd@yahoo.com
Le Lu (F) received his MSE (2004) and PhD (May 2007) degrees from the Computer Science Department of Johns Hopkins University, which he joined in September 2001. Before that, he studied pattern recognition and computer vision at the National Laboratory of Pattern Recognition, Chinese Academy of Sciences, and at Microsoft Research Asia between 1996 and 2001. Dr. Lu was at Siemens Corporate Research and Siemens Medical Solutions (USA) from 2006 until 2013. From January 2013 to October 2017, he served as a staff scientist in the Radiology and Imaging Sciences Department of the National Institutes of Health (NIH) Clinical Center, where he was the main technical leader for two of the most impactful public radiology image dataset releases (NIH ChestX-ray14, NIH DeepLesion 2018).
In 2017, Dr. Lu founded NVIDIA’s medical image analysis group and held the position of Senior Research Manager until June 2018. He then served as the Executive Director of the PAII Inc. Bethesda Research Lab, Maryland, USA. He now leads the global medical AI R&D efforts at Alibaba's DAMO Academy as a Senior Director.
Dr. Lu won the NIH Clinical Center Director Award (2017), the NIH Mentor of the Year Award (2015), and the NIH Clinical Center Best Summer Internship Mentor Award (2013). He was the MICCAI (Annual Conference on Medical Image Computing and Computer-Assisted Intervention) 2017 Young Scientist Award runner-up, received the MICCAI 2018 Young Scientist Publication Impact Award, and was a MICCAI 2019 and 2020 Medical Image Analysis Best Paper Award finalist. He also received the RSNA (Annual Meeting of the Radiological Society of North America) 2016 and 2018 Research Trainee Awards in Informatics, and the AFSUMB (Annual Meeting of the Asian Federation of Societies for Ultrasound in Medicine and Biology) 2021 Young Investigator Award (YIA) Silver Award.
Dr. Lu was elected IEEE Fellow (2021) and serves as a MICCAI Society Board Member (MICCAI-Industry Workgroup Chair). He is currently an Associate Editor, IEEE Transactions on Pattern Analysis and Machine Intelligence (since September 2020) and IEEE Signal Processing Letters (since July 2020).
Le Lu
E: tiger.lelu@gmail.com
Andreas Stolcke (F) is a Senior Principal Scientist with Amazon Alexa in Sunnyvale, California. Before joining Amazon, he held senior researcher positions at Microsoft (2011-2019) and at SRI International (1994-2011), and was affiliated with the International Computer Science Institute (ICSI) in Berkeley, California, most recently as an External Fellow. He received a Diplom (Master’s) degree from Technical University Munich (1984-1988) and a PhD in computer science from UC Berkeley (1988-1994) for thesis work on probabilistic parsing and grammar induction.
Dr. Stolcke served as Associate Editor, IEEE Transactions on Audio, Speech and Language Processing (2000-2002); Co-Editor, Computer Speech and Language (2003-2006); and Editorial Board Member, Computational Linguistics (1997-1999). He has organized special sessions and workshops at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) and at Association for Computational Linguistics (ACL) conferences. He served on the IEEE SPS Speech and Language Processing Technical Committee (2013-2019) and is a Fellow of both the IEEE (2011) and the International Speech Communication Association (2013).
Dr. Stolcke has made contributions to machine learning and algorithms for speech and language processing, including conversational speech recognition, speaker recognition and diarization, and paralinguistic modeling. He developed the entropy-based pruning method for N-gram language models and designed and open-sourced the widely used SRILM language modeling toolkit. He pioneered several methods for using ASR by-products for speaker recognition and conceived the DOVER algorithm for combining multiple diarization hypotheses.
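To give a flavor of the voting idea behind DOVER, here is a deliberately simplified Python sketch: each diarization hypothesis is represented as a sequence of per-frame speaker labels of equal length, assumed to be already mapped to a common label space, and the combined output is a weighted per-frame majority vote (the real algorithm also performs the cross-hypothesis label alignment that this sketch omits).

    from collections import Counter

    def dover_style_vote(hypotheses, weights=None):
        # Combine several diarization hypotheses by weighted per-frame voting.
        # Each hypothesis is a list of speaker labels, one per fixed-length frame;
        # all hypotheses must have the same length and share a common label space
        # (the real DOVER algorithm performs that label alignment step itself).
        if weights is None:
            weights = [1.0] * len(hypotheses)
        num_frames = len(hypotheses[0])
        combined = []
        for t in range(num_frames):
            votes = Counter()
            for hyp, w in zip(hypotheses, weights):
                votes[hyp[t]] += w            # accumulate weighted votes per label
            combined.append(votes.most_common(1)[0][0])
        return combined

    # Example: three hypotheses over six frames with equal weights.
    h1 = ["A", "A", "B", "B", "B", "A"]
    h2 = ["A", "B", "B", "B", "A", "A"]
    h3 = ["A", "A", "B", "B", "B", "B"]
    print(dover_style_vote([h1, h2, h3]))     # ['A', 'A', 'B', 'B', 'B', 'A']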
Dr. Stolcke’s current work is focused on exploiting the full range of speech communication in speech and speaker understanding and making conversational speech agents more natural and contextually aware.
Andreas Stolcke
E: stolcke@icsi.berkeley.edu