Distinguished Industry Speakers


The following is a list of the Signal Processing Society's Distinguished Industry Speakers.



2022 Distinguished Industry Speakers

Jerome R. Bellegarda

Jerome R. Bellegarda (F) is Apple Distinguished Scientist in Intelligent System Experience at Apple Inc., Cupertino, California, where he works on multiple user interaction modalities, including speech, handwriting, touch, keyboard, and camera input. Prior to joining Apple in 1994, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, New York (1988-1994). He received a Bachelor's degree in Mechanical Engineering from the University of Nancy, Nancy, France, in 1983, and MSc and PhD degrees in Electrical Engineering from the University of Rochester, Rochester, New York, in 1984 and 1987, respectively.

Dr. Bellegarda was elected IEEE Fellow (2003) and Fellow of the International Speech Communication Association (ISCA) (2013). He has held editorial positions for the IEEE Transactions on Speech and Audio Processing (1999-2004) and Speech Communication (2004-present). He has served on the IEEE Speech and Language Processing Technical Committee (2015-2019), the IEEE Data Science Initiative Steering Committee (2017-2019), and the ISCA Advisory Council (2013-2020). He was Chair of the ISCA Fellows Selection Committee (2016-2018), and General or Technical Chair for multiple workshops and conferences, including the Workshop on Hands-free Speech Communication and Microphone Arrays (2017) and the International Conference on Speech Communication and Technology (Interspeech) (2012).

Dr. Bellegarda received a Best Paper Award from ISCA for his work on adaptive language modeling (2006). He was also nominated by the IEEE SPS Speech and Language Processing Technical Committee for the 2001 IEEE W.R.G. Baker Prize Paper Award and the 2003 IEEE SPS Best Paper Award. Among his diverse contributions to speech and language advances over the years, he pioneered the use of tied mixtures in acoustic modeling and latent semantics in language modeling.

Dr. Bellegarda’s research interests span machine learning applications, statistical modeling algorithms, natural language processing, man-machine communication, multiple input/output modalities, and multimedia knowledge management.

Jerome Bellegarda
E: jerome@ieee.org

Lecture Topics

  • Input Intelligence on Mobile Devices
  • Natural Language Interaction for Personal Assistance
  • Data Diversity via Synthetic Data Generation

Mariya Doneva

Mariya Doneva (M) is a Senior Scientist and Team Lead at Philips Research, Hamburg, Germany. She received her BSc and MSc degrees in Physics from the University of Oldenburg in 2006 and 2007, respectively, and her PhD degree in Physics from the University of Lübeck in 2010. She was a Research Associate in the Electrical Engineering and Computer Sciences Department at UC Berkeley between 2015 and 2016. Since 2016, Dr. Doneva has been leading the activities on MR Fingerprinting (a novel approach to efficient multi-parametric quantitative imaging) at Philips Research, including in-house technical development and collaboration with clinical and technical partners.

Dr. Doneva has served as an Organizing Committee Member for the International Society for Magnetic Resonance in Medicine (ISMRM) (2019-2021), the IEEE International Symposium on Biomedical Imaging (ISBI) (2020), and the ISMRM Workshop on Data Sampling and Image Reconstruction (2020). She was Guest Editor of the IEEE Signal Processing Magazine Special Issue on Computational MRI: Compressive Sensing and Beyond; Editor of a comprehensive reference book on Quantitative Magnetic Resonance Imaging; Editorial Board Member of Magnetic Resonance in Medicine and IEEE Transactions on Computational Imaging; and Editor of a reference book on MR image reconstruction. She is a recipient of the Junior Fellow Award of the International Society for Magnetic Resonance in Medicine (2011).

Dr. Doneva’s research interests include methods for efficient data acquisition, image reconstruction and quantitative parameter mapping in the context of magnetic resonance imaging. Her work involves developing mathematical optimization and signal processing approaches that aim at improving the MR scan efficiency and obtaining robust and reliable (multi-parametric) quantitative information for diagnostics and therapy follow up.

Mariya Doneva
Philips GmbH Innovative Technologies
Hamburg, Germany
E: mariya.doneva@philips.com

Lecture Topics

  • MR Image Reconstruction as a Computational Imaging Problem: From Model-Based Reconstruction and Sparsity to Machine Learning
  • Efficient Quantitative MR Imaging: MR Fingerprinting and Beyond
  • The Path of Medical Imaging Innovations: From Early Ideas to Product and Clinical Adoption

Leo Grady

Leo Grady (M) received the B.Sc. degree in Electrical Engineering from the University of Vermont and a Ph.D. in Cognitive and Neural Systems from Boston University. During his tenure as CEO of Paige, Dr. Grady led the company to become an industry leader, launch several groundbreaking software products internationally, and become the first company ever to receive FDA approval for an AI product in pathology. Dr. Grady is currently CEO in Residence with Breyer Capital.

Prior to joining Paige, Dr. Grady was SVP of Engineering at HeartFlow, where he led full-stack technology and product development for HeartFlow’s cardiovascular diagnostic and treatment-planning software while also driving HeartFlow’s IP portfolio. Before HeartFlow, he served in various technology and leadership roles at Siemens Healthcare. He is internationally recognized as a technology leader in AI for healthcare. He received the Edison Patent Award (2012) for best patent in medical imaging and was inducted as a Fellow of the American Institute for Medical and Biological Engineering (2014).

Dr. Grady was an Editorial Board Member of the Society for Industrial and Applied Mathematics (SIAM) Journal on Imaging Sciences and the Journal of Mathematical Imaging, and an Area Chair for the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) (2012-2016) and the Conference on Computer Vision and Pattern Recognition (CVPR) (2013-2014). He has served on grant boards for NIH small-business grants and NSF computer vision grants. He is a member of IEEE, the MICCAI Society, and Tau Beta Pi (the engineering honor society). He was a Planning Committee Member for MICCAI (2017) and a Program Committee Member for the European Conference on Computer Vision (ECCV); Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR); the International Conference on Distributed Smart Cameras; Medical Computer Vision (MCV) on Big Data, Deep Learning and Novel Representations; Interactive Computer Vision; Perceptual Organization for Computer Vision; Structured Models in Computer Vision; and Information Theory in Computer Vision and Pattern Recognition.

Leo Grady
Darien, CT, USA
E: leograd@yahoo.com

Lecture Topics

  • AI and Computer Vision
  • Healthcare, Software as a Medical Device and Biotech
  • Entrepreneurship and Startups
  • Translational Research
  • Patents

Le Lu

Le Lu (F) received MSE (2004) and PhD (2007) degrees from the Computer Science Department, Johns Hopkins University, which he joined in September 2001. Before that, he studied pattern recognition and computer vision at the National Lab of Pattern Recognition, Chinese Academy of Sciences, and at Microsoft Research Asia between 1996 and 2001. Dr. Lu was at Siemens Corporate Research and Siemens Medical Solutions (USA) from 2006 until 2013. From January 2013 to October 2017, he served as a staff scientist in the Radiology and Imaging Sciences Department of the National Institutes of Health (NIH) Clinical Center. He was the main technical leader for two of the most impactful public radiology image dataset releases (NIH ChestXray14, NIH DeepLesion 2018).

In 2017, Dr. Lu went on to found Nvidia’s medical image analysis group, where he held the position of Senior Research Manager until June 2018. He was then Executive Director at the PAII Inc. Bethesda Research Lab, Maryland, USA. He now leads the global medical AI R&D efforts at Alibaba's DAMO Academy as a Senior Director.

Dr. Lu won the NIH Clinical Center Director Award (2017), the NIH Mentor of the Year Award (2015), and the NIH Clinical Center Best Summer Internship Mentor Award (2013). He was the MICCAI (Annual Conference on Medical Image Computing and Computer-Assisted Intervention) 2017 Young Scientist Award runner-up, won the MICCAI 2018 Young Scientist Publication Impact Award, and was a MICCAI 2019 and 2020 Medical Image Analysis Best Paper Award finalist. He also received RSNA (Annual Meeting of the Radiological Society of North America) 2016 and 2018 Research Trainee awards in Informatics, and the AFSUMB (Annual Meeting of the Asian Federation of Societies for Ultrasound in Medicine and Biology) 2021 Young Investigator Award (YIA) Silver Award.

Dr. Lu was elected IEEE Fellow (2021) and MICCAI Society Board Member (MICCAI-Industry Workgroup Chair). He is currently an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (since September 2020) and IEEE Signal Processing Letters (since July 2020).

Le Lu
E: tiger.lelu@gmail.com

Lecture Topics

  • Facing the Global Health Challenges in Population Health
  • Oncology Via Scalable AI Tools

Andreas Stolcke

Andreas Stolcke (F) is a Senior Principal Scientist with Amazon Alexa in Sunnyvale, California. Before joining Amazon, he held senior researcher positions at Microsoft (2011-2019) and at SRI International (1994-2011), and was affiliated with the International Computer Science Institute (ICSI) in Berkeley, California, most recently as an External Fellow. He received a Diplom (Master’s) degree from Technical University Munich (1984-1988) and a PhD in computer science from UC Berkeley (1988-1994) for thesis work on probabilistic parsing and grammar induction.

Dr. Stolcke served as Associate Editor, IEEE Transactions on Audio Speech and Language Processing (2000-2002), co-editor for Computer Speech and Language (2003-2006), and editorial board member, Computational Linguistics (1997-1999). He has organized special sessions and workshops at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) and the Association for Computational Linguistics (ACL) conferences. He served on the IEEE SPS Speech and Language Processing Technical Committee (2013-2019) and is Fellow of both the IEEE (2011) and of the International Speech Communication Association (2013).

Dr. Stolcke has made contributions to machine learning and algorithms for speech and language processing, including conversational speech recognition, speaker recognition and diarization, and paralinguistic modeling. He developed the entropy-based pruning method for N-gram language models and designed and open-sourced the widely used SRILM language modeling toolkit. He pioneered several methods for using ASR by-products for speaker recognition and conceived the DOVER algorithm for combining multiple diarization hypotheses.

Dr. Stolcke’s current work is focused on exploiting the full range of speech communication in speech and speaker understanding and making conversational speech agents more natural and contextually aware.

Andreas Stolcke
E: stolcke@icsi.berkeley.edu

Lecture Topics

  • Speech Recognition and Understanding for Conversations and Meetings
  • Speech Technology for Advanced Conversational Agents
  • Recent Advances in Speaker Recognition and Diarization

2021 Distinguished Industry Speakers

Achintya K. Bhowmik

Achintya K. (Achin) Bhowmik (F) is the Chief Technology Officer and Executive Vice President of Engineering at Starkey Hearing Technologies, a privately held medical devices business with over 5,000 employees and operations in over 100 countries worldwide. In this role, he is responsible for the company’s technology strategy, global research, product development, and engineering departments, and leads the drive to transform hearing aids into multifunction wearable health and communication devices with advanced sensors and artificial intelligence technologies.

Prior to joining Starkey, Dr. Bhowmik was Vice President and General Manager of the Perceptual Computing Group at Intel Corporation. There, he was responsible for the R&D, engineering, operations, and businesses in the areas of 3D sensing and interactive computing, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices.

Dr. Bhowmik is an Adjunct Professor at the Stanford University School of Medicine, where he advises research and lectures in the areas of multisensory cognition, perceptual augmentation, and intelligent systems. He has also held adjunct and guest professor positions at the University of California, Berkeley; the Liquid Crystal Institute of Kent State University; Kyung Hee University, Seoul; and the Indian Institute of Technology, Gandhinagar. He received his B.Tech. degree (1996) from the Indian Institute of Technology, Kanpur, and Ph.D. degree (2000) from Auburn University. He has authored over 200 publications, including two books, and holds 38 issued patents.

Dr. Bhowmik serves on the board of trustees of the National Captioning Institute, the board of directors of OpenCV, the executive board of the Society for Information Display (SID) as President-Elect, and the board of advisors of the Fung Institute for Engineering Leadership at UC Berkeley. He is on the boards of directors and advisors of several technology startup companies. His awards and honors include TIME’s Best Inventions, the Artificial Intelligence Breakthrough Award, the Red Dot Design Award, the Industrial Distinguished Leader Award from the Asia-Pacific Signal and Information Processing Association, and Fellow status in IEEE and SID.

Dr. Bhowmik and his work have been covered in numerous press articles, including TIME, Fortune, Wired, USA Today, US News & World Reports, Wall Street Journal, Bloomberg Businessweek, CBS News, Forbes, Scientific American, Popular Mechanics, MIT Technology Review, SlashGear, Tom’s Hardware, Business Insider, EE Times, CNET, Computerworld, Gizmodo, The Verge, Digital Trends, etc.

Achin Bhowmik, PhD
Stanford University School of Medicine
Cupertino, CA, USA
E: achintya.k.bhowmik@gmail.com

Lecture Topics

  • Transforming Hearing Aids into Multipurpose Devices as a Gateway to Health and Information
  • Enhancing and Augmenting Human Perception with Sensors and Artificial Intelligence
  • Evolving Medtech in the Era of Digitalization and Artificial Intelligence
  • Artificial Intelligence: from Pixels and Phonemes to Semantic Understanding and Interactions
  • Virtual and Augmented Reality: Towards Life-Like Immersive Experiences
  • Cognitive Neuroscience: How Do We Sense and Understand the World?
  • Perceptual Computing: Enabling Machines to Sense and Understand the World

Chienchung Chang

Chienchung Chang (M) received the B.S. degree from National Tsing-Hua University, Hsinchu, Taiwan, in 1982, and the M.S. and PhD degrees from the University of California, San Diego, La Jolla, USA, in 1987 and 1991, respectively, all in electrical and computer engineering. Dr. Chang has been with Qualcomm since 1991, for almost 30 years. Currently, he is Vice President of Engineering at Qualcomm Technologies, where he serves as department head of the Multimedia R&D and Standards group, with a major focus on forward-looking research in speech, video, imaging, computer vision (CV), AI, and XR (VR/AR) technologies. His R&D teams have also been heavily engaged with standards in speech, video, imaging, and computer vision.

Dr. Chang pioneered the introduction of video, camera, and display technologies into Qualcomm Snapdragon® products. In 2005, he successfully led QCT’s first multimedia-centric chipset, the MSM6550, from design to commercialization. MPEG-4/H.263 video codecs, a 4MP CMOS image sensor, and a GPU were introduced into CDMA/WCDMA handsets for the first time. The MSM6550 became the best-selling chipset in the company’s history at that time. It not only marked a watershed moment in the smartphone boom, but also laid a solid foundation for multimedia technology innovation within Snapdragon platforms up to today.

Dr. Chang was commissioned to start the Multimedia R&D and Standards Group, focusing on forward-looking multimedia research in speech, video, imaging, CV, and XR. Under his leadership, Qualcomm won the ITU-T/MPEG H.265/HEVC, SVC, H.266/VVC, and EVC video codec and 3GPP EVS speech codec standard competitions. These codec standards are expected to benefit smartphone, automotive, XR, and Internet of Things (IoT) applications for years to come.

Dr. Chang helped Qualcomm launch 3D depth-sensing technology based on structured light (3DSL) and its own programmable hardware, RICA, in 2017. Together with 3D face authentication (3DFA) and related CV technologies, he proactively helped Qualcomm deliver highly competitive biometric solutions for Android handsets to counter the iPhone’s Face ID. More recently, Dr. Chang has led XR research, augmented by CV and machine learning research, rolling out leading-edge perception technologies such as 6DOF tracking, 3D reconstruction, hand/object detection and tracking, digital humans, and split XR. These technologies enhance Qualcomm’s technology leadership and have helped it become the world’s largest VR/AR chipset vendor today.

Dr. Chang’s research interests include speech compression, speech recognition, imaging and video processing, computer vision, pattern recognition, and machine learning. He holds 264 patents granted in the US and other countries, covering speech processing, compression, and recognition; echo cancellation; voice user interfaces; video coding; display processing; and computer vision. He was recognized as a Distinguished Alumnus of the College of EECS, National Tsing Hua University, Taiwan, in April 2018.

Chienchung Chang
E: cchang@qti.qualcomm.com

Lecture Topics

  • Unleash Multimedia Technologies on Smartphones, Automotive, XR, IoT
  • Boundless XR
  • Disruptive 3D Sensing

Mérouane Debbah

Mérouane Debbah (F) received the M.Sc. and Ph.D. degrees from the École Normale Supérieure Paris-Saclay, France. He was with Motorola Labs, Saclay, France, from 1999 to 2002, and with the Vienna Research Center for Telecommunications, Vienna, Austria, until 2003. From 2003 to 2007, he was an Assistant Professor with the Mobile Communications Department, Institut Eurecom, Sophia Antipolis, France. In 2007, he was appointed Full Professor at CentraleSupélec, Gif-sur-Yvette, France. From 2007 to 2014, he was Director of the Alcatel-Lucent Chair on Flexible Radio. Since 2014, he has been Vice-President of the Huawei France Research Center, where he jointly directs the Mathematical and Algorithmic Sciences Lab and the Lagrange Mathematical and Computing Research Center.

Dr. Debbah is an IEEE Fellow, a WWRF Fellow, and a Membre Émérite of the SEE. He was a recipient of the ERC Grant MORE (Advanced Mathematical Tools for Complex Network Engineering) (2012-2017), the Mario Boella Award (2005), the IEEE Glavieux Prize Award (2011), the Qualcomm Innovation Prize Award (2012), and the IEEE Radio Communications Committee Technical Recognition Award (2019). He has received more than 20 best paper awards, among them the 2007 IEEE GLOBECOM Best Paper Award, the Wi-Opt 2009 Best Paper Award, the 2010 Newcom++ Best Paper Award, the WUN CogCom Best Paper 2012 and 2013 Awards, the 2014 WCNC Best Paper Award, the 2015 ICC Best Paper Award, the 2015 IEEE Communications Society Leonard G. Abraham Prize, the 2015 IEEE Communications Society Fred W. Ellersick Prize, the 2016 IEEE Communications Society Best Tutorial Paper Award, the 2016 European Wireless Best Paper Award, the 2017 EURASIP Best Paper Award, the 2018 IEEE Marconi Prize Paper Award, the 2019 IEEE Communications Society Young Author Best Paper Award, and the Valuetools 2007, Valuetools 2008, CrownCom 2009, Valuetools 2012, SAM 2014, and 2017 IEEE Sweden VT-COM-IT Joint Chapter Best Student Paper Awards. He is Associate Editor-in-Chief of the journal Random Matrix: Theory and Applications, and was Associate Area Editor and Senior Area Editor of IEEE Transactions on Signal Processing from 2011 to 2013 and from 2013 to 2014, respectively.

Dr. Debbah’s research interests lie in fundamental mathematics, algorithms, statistics, information, and communication sciences research.

Mérouane Debbah
Director of the Huawei Mathematical and Algorithmic Sciences Lab
Arcs de Seine
Boulogne-Billancourt, France
E: merouane.debbah@huawei.com

Lecture Topics

  • Wireless AI: From Cloud AI to On-Device AI
  • Rebuilding the Theoretical Foundations of Communication and Computing
  • Fundamentals of 5G
  • Random Matrix Theory: Theory and Applications
  • An Outlook on Beyond 5G

Dilek Hakkani-Tür

Dilek Hakkani-Tür (F) is a Senior Principal Scientist at Amazon Alexa AI and a Visiting Distinguished Professor at UC Santa Cruz, focusing on enabling natural dialogues with machines. Prior to joining Amazon, she led the dialogue research group at Google (2016-2018) and was a Principal Researcher at Microsoft Research (2010-2016), the International Computer Science Institute (ICSI) (2006-2010), and AT&T Labs-Research (2001-2005). She received her BSc degree from Middle East Technical University in 1994, and MSc and PhD degrees from the Department of Computer Engineering, Bilkent University, in 1996 and 2000, respectively.

Dr. Hakkani-Tür is the recipient of three best paper awards for her work on active learning for dialogue systems, from the IEEE Signal Processing Society (2008), the International Speech Communication Association (ISCA) (2007), and the European Association for Signal Processing (EURASIP) (2007). She served as Associate Editor of IEEE Transactions on Audio, Speech and Language Processing (2005-2008), Member of the IEEE Speech and Language Processing Technical Committee (2009-2014), Area Editor for speech and language processing for Elsevier's Digital Signal Processing journal and IEEE Signal Processing Letters (2011-2013), and served on the ISCA Advisory Council (2015-2018). She is Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech and Language Processing (2019-2021), and a Fellow of the IEEE (2014) and ISCA (2014).

Dr. Hakkani-Tür’s research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing.

Dilek Hakkani-Tür
E: dilek@ieee.org, dilek.IATASLP@gmail.com

Lecture Topics

  • Conversational Machines: Towards bridging the chasm between task-oriented and social conversations
  • Deep Learning for Task-Oriented Dialogue Systems
  • Neural Network-based Response Generation in Social Conversational Systems

Xiaodong He

Xiaodong He (F) is Vice President of Technology at JD.COM Inc., Deputy Managing Director of JD AI Research, and Head of the Deep Learning, NLP and Speech Lab. He is also an Affiliate Professor in the ECE Department of the University of Washington (Seattle). Dr. He joined JD.COM, the largest online retailer in China, in 2018. Prior to that, he was Principal Researcher and Research Manager of the Deep Learning Technology Center (DLTC) at Microsoft Research, Redmond, WA, USA. He holds a Bachelor's degree from Tsinghua University (Beijing), an MS degree from the Chinese Academy of Sciences (Beijing), and a PhD degree from the University of Missouri-Columbia, MO, USA.

Dr. He is an IEEE Fellow, for contributions to multimodal signal processing in human language and vision technologies, and a Fellow of the China Association of Artificial Intelligence (CAAI). He has held editorial positions with the Transactions of the Association for Computational Linguistics (TACL) and multiple IEEE journals, including IEEE Signal Processing Magazine and IEEE Signal Processing Letters (2017-2018), and has served on the organizing and program committees of major artificial intelligence conferences. He was Chair of the IEEE Seattle Section (2016-2017) and served on the IEEE Speech and Language Processing Technical Committee (2015-2017).

Dr. He’s work, including Deep Structured Semantic Models (DSSM), Hierarchical Attention Networks (HAN), Bottom-Up and Top-Down Attention models (BUTD), Stacked Attention Networks (SAN), MS-Celeb-1M, AttnGAN, CaptionBot, DistMult, STAGG-QA, and RNN-SLU, is widely applied to important scenarios in natural language processing, computer vision, dialogue systems, multimodal human-machine interaction, information retrieval, and knowledge graphs. He also led the development of the industry-first emotion-aware conversational system, which provides large-scale smart customer services to more than 300 million users of JD.COM.

Dr. He has received multiple Best Paper Awards, from the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2011), the Association for Computational Linguistics (ACL) (2015), and the IEEE/ACM Transactions on Audio, Speech, and Language Processing (2018). He won the following major AI challenges: the NIST Machine Translation Evaluation (2008), the International Workshop on Spoken Language Translation (IWSLT) (2011), the COCO Captioning Challenge (2015), Visual Question Answering (VQA) (2017), and WikiHop-QA (2019).

Dr. He’s research interests are mainly in natural language, vision and multimodal intelligence, which is connected to deep learning, natural language processing, speech recognition, information retrieval, computer vision and other relevant fields.

Xiaodong He
E: xiaohe@ieee.org

Lecture Topics

  • The Progress in Vision-and-Language Multimodal Intelligence
  • Language Understanding, Question Answering, and Dialogue: The Evolution of Language Intelligence
  • Multimodal Conversational AI for Smart Customer Service Systems

