Distinguished Industry Speakers



The following is a list of the Signal Processing Society's Distinguished Industry Speakers.



2021 Distinguished Industry Speakers

Achintya K. Bhowmik

Achintya K. (Achin) Bhowmik (SM) is the Chief Technology Officer and Executive Vice President of Engineering at Starkey Hearing Technologies, a privately held medical device company with over 5,000 employees and operations in over 100 countries worldwide. In this role, he is responsible for the company’s technology strategy and for leading its global research, product development, and engineering departments, driving the transformation of hearing aids into multifunction wearable health and communication devices with advanced sensors and artificial intelligence technologies.

Prior to joining Starkey, Dr. Bhowmik was Vice President and General Manager of the Perceptual Computing Group at Intel Corporation. There, he was responsible for the R&D, engineering, operations, and businesses in the areas of 3D sensing and interactive computing, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices.

Dr. Bhowmik is an Adjunct Professor at the Stanford University School of Medicine, where he advises research and lectures in the areas of multisensory cognition, perceptual augmentation, and intelligent systems. He has also held adjunct and guest professor positions at the University of California, Berkeley; the Liquid Crystal Institute at Kent State University; Kyung Hee University, Seoul; and the Indian Institute of Technology, Gandhinagar. He received his B.Tech. degree (1996) from the Indian Institute of Technology, Kanpur, and his Ph.D. degree (2000) from Auburn University. He has authored over 200 publications, including two books, and holds 38 issued patents.

Dr. Bhowmik serves on the board of trustees for the National Captioning Institute, board of directors for OpenCV, executive board for the Society for Information Display (SID) as the president-elect, and board of advisors for the Fung Institute for Engineering Leadership at UC Berkeley. He is on the board of directors and advisors for several technology startup companies. His awards and honors include TIME’s Best Inventions, Artificial Intelligence Breakthrough Award, Red Dot Design Award, Industrial Distinguished Leader Award from the Asia-Pacific Signal and Information Processing Association, and the Fellow of SID.

Dr. Bhowmik and his work have been covered in numerous press articles, including TIME, Fortune, Wired, USA Today, U.S. News & World Report, The Wall Street Journal, Bloomberg Businessweek, CBS News, Forbes, Scientific American, Popular Mechanics, MIT Technology Review, SlashGear, Tom’s Hardware, Business Insider, EE Times, CNET, Computerworld, Gizmodo, The Verge, and Digital Trends.

Achin Bhowmik, PhD
Stanford University School of Medicine
Cupertino, CA, USA
E: achintya.k.bhowmik@gmail.com

Lecture Topics

  • Transforming Hearing Aids into Multipurpose Devices as a Gateway to Health and Information
  • Enhancing and Augmenting Human Perception with Sensors and Artificial Intelligence
  • Evolving Medtech in the Era of Digitalization and Artificial Intelligence
  • Artificial Intelligence: from Pixels and Phonemes to Semantic Understanding and Interactions
  • Virtual and Augmented Reality: Towards Life-Like Immersive Experiences
  • Cognitive Neuroscience: How Do We Sense and Understand the World?
  • Perceptual Computing: Enabling Machines to Sense and Understand the World

Chienchung Chang

Chienchung Chang (M) received the B.S. degree from National Tsing-Hua University, Hsinchu, Taiwan, in 1982, and the M.S. and Ph.D. degrees from the University of California, San Diego, La Jolla, CA, USA, in 1987 and 1991, respectively, all in electrical and computer engineering. Dr. Chang has been with Qualcomm since 1991, almost 30 years. He is currently Vice President of Engineering at Qualcomm Technologies, where he heads the Multimedia R&D and Standards group, with a major focus on forward-looking research in speech, video, imaging, computer vision (CV), AI, and XR (VR/AR) technologies. His R&D teams have also been heavily engaged in standardization in speech, video, imaging, and computer vision.

Dr. Chang pioneered the introduction of video, camera, and display technologies into Qualcomm Snapdragon® products. In 2005, he successfully led QCT’s first multimedia-centric chipset, MSM6550, from design through commercialization, introducing MPEG-4/H.263 video codecs, 4MP CMOS image sensor support, and a GPU into CDMA/WCDMA handsets for the first time. The MSM6550 became the best-selling chipset in the company’s history at the time. It not only marked a watershed moment in the smartphone boom, but also laid a solid foundation for multimedia technology innovation within Snapdragon platforms that continues today.

Dr. Chang was commissioned to start the Multimedia R&D and Standards Group, focusing on forward-looking multimedia research in speech, video, imaging, CV, and XR. Under his leadership, Qualcomm won the ITU-T/MPEG H.265/HEVC, SVC, H.266/VVC, and EVC video codec and 3GPP EVS speech codec standard competitions. These codec standards are expected to benefit smartphone, automotive, XR, and Internet of Things (IoT) applications for years to come.

Dr. Chang helped Qualcomm launch 3D depth-sensing technology based on structured light (3DSL), running on its own programmable hardware, RICA, in 2017. Together with 3D face authentication (3DFA) and related CV technologies, he helped Qualcomm deliver highly competitive biometric solutions for Android handsets to rival the iPhone's Face ID. More recently, Dr. Chang has led XR research, augmented by CV and machine learning research, rolling out leading-edge perception technologies such as 6DOF tracking, 3D reconstruction, hand/object detection and tracking, digital humans, and split XR. These technologies reinforce Qualcomm's technology leadership and have helped it become the world's largest VR/AR chipset vendor today.

Dr. Chang’s research interests include speech compression, speech recognition, imaging and video processing, computer vision, pattern recognition, and machine learning. He holds 264 patents granted in the US and other countries, covering speech processing, compression, and recognition; echo cancellation; voice user interfaces; video coding; display processing; and computer vision. He was recognized as a Distinguished Alumnus of the College of EECS, National Tsing Hua University, Taiwan, in April 2018.

Chienchung Chang
E: cchang@qti.qualcomm.com

Lecture Topics

  • Unleash Multimedia Technologies on Smartphones, Automotive, XR, IoT
  • Boundless XR
  • Disruptive 3D Sensing

Mérouane Debbah

Mérouane Debbah (F) received the M.Sc. and Ph.D. degrees from the Ecole Normale Supérieure Paris-Saclay, France. He was with Motorola Labs, Saclay, France, from 1999 to 2002, and with the Vienna Research Center for Telecommunications, Vienna, Austria, until 2003. From 2003 to 2007, he was an Assistant Professor with the Mobile Communications Department, Institut Eurecom, Sophia Antipolis, France. In 2007, he was appointed Full Professor at CentraleSupelec, Gif-sur-Yvette, France. From 2007 to 2014, he was the Director of the Alcatel-Lucent Chair on Flexible Radio. Since 2014, he has been Vice-President of the Huawei France Research Center, where he jointly directs the Mathematical and Algorithmic Sciences Lab and the Lagrange Mathematical and Computing Research Center.

Dr. Debbah is an IEEE Fellow, a WWRF Fellow, and a Membre émérite SEE. He was a recipient of the ERC Grant MORE (Advanced Mathematical Tools for Complex Network Engineering) (2012-2017), the Mario Boella Award (2005), the IEEE Glavieux Prize Award (2011), the Qualcomm Innovation Prize Award (2012), and the IEEE Radio Communications Committee Technical Recognition Award (2019). He has received more than 20 best paper awards, among them the 2007 IEEE GLOBECOM Best Paper Award, the Wi-Opt 2009 Best Paper Award, the 2010 Newcom++ Best Paper Award, the WUN CogCom Best Paper 2012 and 2013 Awards, the 2014 WCNC Best Paper Award, the 2015 ICC Best Paper Award, the 2015 IEEE Communications Society Leonard G. Abraham Prize, the 2015 IEEE Communications Society Fred W. Ellersick Prize, the 2016 IEEE Communications Society Best Tutorial Paper Award, the 2016 European Wireless Best Paper Award, the 2017 EURASIP Best Paper Award, the 2018 IEEE Marconi Prize Paper Award, the 2019 IEEE Communications Society Young Author Best Paper Award, and the Valuetools 2007, Valuetools 2008, CrownCom 2009, Valuetools 2012, SAM 2014, and 2017 IEEE Sweden VT-COM-IT Joint Chapter Best Student Paper Awards. He is an Associate Editor-in-Chief of the journal Random Matrices: Theory and Applications, and served as Associate Area Editor and Senior Area Editor of the IEEE Transactions on Signal Processing from 2011 to 2013 and from 2013 to 2014, respectively.

Dr. Debbah’s research interests lie in fundamental mathematics, algorithms, statistics, information, and communication sciences research.

Mérouane Debbah
Director of the Huawei Mathematical and Algorithmic Sciences Lab
Arcs de Seine
Boulogne-Billancourt, France
E: merouane.debbah@huawei.com

Lecture Topics

  • Wireless AI: From Cloud AI to On-Device AI
  • Rebuilding the Theoretical Foundations of Communication and Computing
  • Fundamentals of 5G
  • Random Matrix Theory: Theory and Applications
  • An Outlook on Beyond 5G

Dilek Hakkani-Tür

Dilek Hakkani-Tür (F) is a senior principal scientist at Amazon Alexa AI and a Visiting Distinguished Professor at UC Santa Cruz, focusing on enabling natural dialogues with machines. Prior to joining Amazon, she led the dialogue research group at Google (2016-2018) and was a principal researcher at Microsoft Research (2010-2016), the International Computer Science Institute (ICSI, 2006-2010), and AT&T Labs-Research (2001-2005). She received her BSc degree from Middle East Technical University in 1994, and her MSc and PhD degrees from the Department of Computer Engineering, Bilkent University, in 1996 and 2000, respectively.

Dr. Hakkani-Tür is the recipient of three best paper awards for her work on active learning for dialogue systems, from the IEEE Signal Processing Society (2008), the International Speech Communication Association (ISCA) (2007), and the European Association for Signal Processing (EURASIP) (2007). She served as Associate Editor, IEEE Transactions on Audio, Speech and Language Processing (2005-2008); Member, IEEE Speech and Language Processing Technical Committee (2009-2014); Area Editor for speech and language processing, Elsevier's Digital Signal Processing Journal and IEEE Signal Processing Letters (2011-2013); and Member, ISCA Advisory Council (2015-2018). She is Editor-in-Chief, IEEE/ACM Transactions on Audio, Speech and Language Processing (2019-2021), and a Fellow of the IEEE (2014) and ISCA (2014).

Dr. Hakkani-Tür’s research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing.

Dilek Hakkani-Tür
E: dilek@ieee.org, dilek.IATASLP@gmail.com

Lecture Topics

  • Conversational Machines: Towards bridging the chasm between task-oriented and social conversations
  • Deep Learning for Task-Oriented Dialogue Systems
  • Neural Network-based Response Generation in Social Conversational Systems

Xiaodong He

Xiaodong He (F) is Vice President of Technology of JD.COM Inc., Deputy Managing Director of JD AI Research, and Head of the Deep Learning, NLP and Speech Lab. He is also an Affiliate Professor at the ECE Department of the University of Washington (Seattle). Dr. He joined JD.COM, the largest online retailer in China, in 2018. Prior to that, he was Principal Researcher and Research Manager of the Deep Learning Technology Center (DLTC) at Microsoft Research, Redmond, WA, USA. He holds a bachelor's degree from Tsinghua University (Beijing), an MS degree from the Chinese Academy of Sciences (Beijing), and a PhD degree from the University of Missouri, Columbia, MO, USA.

Dr. He is an IEEE Fellow, for contributions to multimodal signal processing in human language and vision technologies, and a Fellow of the China Association of Artificial Intelligence (CAAI). He has held editorial positions at the Transactions of the Association for Computational Linguistics (T-ACL) and multiple IEEE journals, including IEEE Signal Processing Magazine and IEEE Signal Processing Letters (2017-2018), and has served on the organizing and program committees of major artificial intelligence conferences. He was Chair, IEEE Seattle Section (2016-2017), and served on the IEEE Speech and Language Processing Technical Committee (2015-2017).

Dr. He’s work, including Deep Structured Semantic Models (DSSM), Hierarchical Attention Networks (HAN), Bottom-Up and Top-Down Attention models (BUTD), Stacked Attention Networks (SAN), MS-Celeb-1M, AttnGAN, CaptionBot, DistMult, STAGG-QA, and RNN-SLU, is widely applied to important scenarios in natural language processing, computer vision, dialogue systems, multimodal human-machine interaction, information retrieval, and knowledge graphs. He also led the development of the industry-first emotion-aware conversational system, which provides large-scale smart customer service to more than 300 million users of JD.COM.

Dr. He has received multiple Best Paper Awards: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2011), Association for Computational Linguistics (ACL) (2015), and the IEEE/ACM Transactions on Audio, Speech, and Language Processing (2018). He has also won major AI challenges, including the NIST Machine Translation Evaluation (2008), the International Conference on Spoken Language Translation (IWSLT) (2011), the COCO Captioning Challenge (2015), Visual Question Answering (VQA) (2017), and WikiHop-QA (2019).

Dr. He’s research interests are mainly in natural language, vision and multimodal intelligence, which is connected to deep learning, natural language processing, speech recognition, information retrieval, computer vision and other relevant fields.

Xiaodong He
E: xiaohe@ieee.org

Lecture Topics

  • The Progress in Vision-and-Language Multimodal Intelligence
  • Language Understanding, Question Answering, and Dialogue: The Evolution of Language Intelligence
  • Multimodal Conversational AI for Smart Customer Service Systems

2020 Distinguished Industry Speakers

Mehrdad Fatourechi

Dr. Fatourechi (M) is the VP of Engineering of BroadbandTV, a media-tech company that is advancing the world through the creation, distribution, management, and monetization of content. He is currently responsible for managing the research and development (R&D) and IT teams. When he joined BBTV in March 2010, he was initially responsible for managing the research team; his role later expanded to lead the entire engineering department. Under his leadership, BBTV's tech team has become one of the leading and most innovative teams in the digital video space, building several internal and external products (including VISO Catalyst, VISO Prism, VISO NOVI, and VISO Mine) and filing several patents.

Dr. Fatourechi has in-depth knowledge of digital signal processing, machine learning, and pattern recognition algorithms. He holds a PhD in Electrical Engineering from the University of British Columbia (UBC) and was nominated by UBC for NSERC’s Doctoral Prize Award. He has authored more than 30 journal and conference papers with a focus on pattern recognition, machine learning, and intelligent algorithms. He previously held positions in the tech/education industry, including roles as a research associate and sessional lecturer at UBC, as well as consulting engagements with several companies (INETCO, BC Mining Research, and STC enterprises). He was Co-Chair of the IEEE Signal Processing Chapter in Vancouver.

Mehrdad Fatourechi
BroadbandTV Corp
E: mfatourechi@bbtv.com

Lecture Topics

  • Applications of digital signal processing and machine learning algorithms in the digital video industry
  • Rights management for digital video assets
  • How digital signal processing and machine learning can help with the growth of digital content owners
  • Establishing trust and safety in digital video platforms

Aris Gkoulalas-Divanis

Aris Gkoulalas-Divanis (SM) received the BS degree from the University of Ioannina (1999-2003), the MS degree from the University of Minnesota (2003-2005), and the PhD degree (with honors) from the University of Thessaly (2005-2009), all in Computer Science. His PhD dissertation was awarded a Certificate of Recognition and Honorable Mention in the 2009 ACM SIGKDD Doctoral Dissertation Award competition. He was a Postdoctoral Research Fellow in the Department of Biomedical Informatics at Vanderbilt University, working on privacy for medical data sharing (March 2009 to February 2010). He joined IBM Research–Zurich in March 2010 and relocated to IBM Research–Ireland in 2012, where he led research on data privacy for Smarter Cities. In this capacity, he received three Invention Achievement Awards and was designated an IBM Master Inventor. Dr. Gkoulalas-Divanis served as Security & Privacy PIC Co-Chair for IBM Research worldwide (2012-2016) and as leader of the European Big Data Value Association (BDVA) activity group on data protection and pseudonymization (May 2015 to December 2016). He joined IBM Watson Health as a Senior Privacy Research Scientist in June 2016 and became the Technical Lead on Data Protection and Privacy for IBM Watson Health in June 2017.

Dr. Gkoulalas-Divanis is a Member of the IEEE Information Forensics and Security Technical Committee (2018-2020) and the IEEE SPS Challenges and Data Collections Committee (2018-2020), and elected Vice-Chair of the American National Standards Institute (ANSI)-Accredited U.S. Technical Advisory Group (TAG) to ISO/PC 317, “Consumer Protection: Privacy-by-Design”. He is a Research Advisory Board Member of the International Association of Privacy Professionals (IAPP) for a two-year term (2019-2020), as well as a Member of the IEEE Information Forensics and Security (IFS) Technical Directions and the IFS Industry and Government Subcommittees (2019-2020).

Dr. Gkoulalas-Divanis has served as a full-time Research Assistant at both the University of Minnesota and the University of Manchester. He is an Associate Editor, IEEE Transactions on Information Forensics and Security (2016-2020), and the Information Systems Category Editor of ACM Computing Reviews. He is a Senior Member of the IEEE; a Professional Member of the Association for Computing Machinery (ACM), the Society for Industrial and Applied Mathematics (SIAM), the American Association for the Advancement of Science (AAAS), and the International Association of Privacy Professionals (IAPP); and a Member-at-Large of Upsilon Pi Epsilon (UPE) and Sigma Xi.

Dr. Gkoulalas-Divanis’ research interests are in the areas of data privacy, privacy-preserving data mining, privacy and anonymity in location-based services, privacy in medical data sharing, and sensitive knowledge hiding.

Aris Gkoulalas-Divanis
IBM Watson Health
E: gkoulala@us.ibm.com

Lecture Topics

  • Practical data anonymization and data privacy
  • Utility-preserving data de-identification in healthcare
  • Automated identification and blocking of privacy vulnerabilities
  • Utility-preserving sensitive knowledge hiding

Joseph R. Guerci

Dr. Guerci (F) has over 30 years of experience in advanced technology research and development in government, industrial, and academic settings. His government service included a 7-year term with the Defense Advanced Research Projects Agency (DARPA), in which he held the positions of Program Manager, Deputy Office Director, and finally Director of the Special Projects Office (SPO). In these capacities, Dr. Guerci was involved in the inception, research, development, execution, and ultimately transition of next-generation multidisciplinary technologies. His advanced radar and electronic warfare (EW) solutions developed while at DARPA are, for example, deployed in the F-22 and F-35 radars. He also created the KASSPER project, the forerunner of cognitive radar and EW and of several later DARPA projects. He is currently President and CEO of Information Systems Laboratories, Inc.

Dr. Guerci is a graduate of NYU Polytechnic Institute with a Ph.D. in Electrical Engineering and has held adjunct professorships in engineering and applied mathematics at The City University of New York, NYU Polytechnic University, The Cooper Union for the Advancement of Science and Art, and Virginia Tech. Additionally, he has held senior engineer and scientist positions in industry and was Chief Technology Officer (CTO) for SAIC’s $2B+/year Research, Development, Test & Evaluation (RDT&E) Group. He is also currently a member of several industrial, academic, and government advisory boards.

Dr. Guerci is a Fellow of the IEEE for "Contributions to Advanced Radar Theory and its Embodiment in Real-World Systems". He is an internationally recognized leader in the research and development of next-generation sensor and cognitive systems. Dr. Guerci is the recipient of the Warren D. White Award from the Institute of Electrical and Electronics Engineers (IEEE) for "Excellence in Radar Adaptive Processing and Waveform Diversity". He was General Chair, IEEE International Radar Conference (2015), and has been selected to serve as the Radar/EW Series Editor for Artech House publishers. He recently received the IEEE Aerospace & Electronic Systems Society (AESS) Outstanding Organizational Leadership Award (2019).

Joseph Guerci
Information Systems Laboratories, Inc.
E: jrguerci@ieee.org

Lecture Topics

  • Advanced radar and cognitive radar

Karthik Nandakumar

Karthik Nandakumar (SM) is a Research Staff Member at IBM Research Singapore. Since 2017, he has been a member of the IBM Center for Blockchain Innovation (ICBI) in Singapore, working on technologies at the intersection of blockchain and artificial intelligence. From 2014 to 2016, he worked on video surveillance projects and developed robust deep learning algorithms for accurate people counting in crowded scenes and traffic event detection. Prior to joining IBM in 2014, he was a Scientist at the Institute for Infocomm Research, A*STAR, Singapore, where he worked on topics such as biometric template security and facial video analytics for customer profiling. He received his B.E. degree (2002) from Anna University, Chennai, India; M.S. degrees in Computer Science (2005) and Statistics (2007) and Ph.D. degree in Computer Science (2008) from Michigan State University; and M.Sc. degree in Management of Technology (2012) from the National University of Singapore.

Dr. Nandakumar is a Senior Member of the IEEE, the IEEE Computer Society, and the IEEE Signal Processing Society. He is Vice President for Education, IEEE Biometrics Council (2017-present); Associate Editor, IEEE Transactions on Information Forensics and Security (2015-present; Senior Associate Editor since 2019); and Member, IEEE Information Forensics and Security Technical Committee (2017-present). He received the IEEE Signal Processing Society Young Author Best Paper Award (2010); the 5-Year Highest Impact Award for BTAS 2008 at the IEEE International Conference on Biometrics: Theory, Applications and Systems (2013); the Best Poster Award at the IEEE International Conference on Biometrics: Theory, Applications and Systems (2009); the Best Scientific Paper Award at the International Conference on Pattern Recognition, Tampa (2008); the Outstanding Graduate Student award, Department of Computer Science and Engineering, Michigan State University (2008); the Fitch H. Beach Outstanding Graduate Research Award (2008); and the Best Paper Award from the Pattern Recognition journal (2005).

Dr. Nandakumar’s research interests include machine learning, computer vision, biometric authentication, applied cryptography, and blockchain.

Karthik Nandakumar
IBM Singapore Lab
E: nkarthik@sg.ibm.com

Lecture Topics

  • Biometric template protection: crossing the chasm between theory and practice
  • Fingerprint template security
  • When blockchain meets computer vision: opportunities and challenges
  • Deep neural network training on encrypted data

Akihiko Sugiyama

Akihiko (Ken) Sugiyama (F) is a project researcher at Yahoo! JAPAN Research, Yahoo Japan Corporation, in charge of audio and speech signal processing. He received the B.Eng., M.Eng., and D.Eng. degrees, all in Electrical Engineering, from Tokyo Metropolitan University, Tokyo, Japan, in 1979, 1981, and 1998, respectively. From 1981 until 2018, he was engaged in research and development in telecommunications and signal processing at the Central Research Laboratories, NEC Corporation, Japan. In 1987, he was on leave at the Faculty of Engineering and Computer Science, Concordia University, Montreal, QC, Canada, as a Visiting Scientist. He has taught at several universities as a part-time lecturer since 2002, and since 2016 he has been a Visiting Professor at the Faculty of System Design, Tokyo Metropolitan University, Japan.

Dr. Sugiyama served as Associate Editor, IEEE Transactions on Signal Processing (1994-1996); Chair, Audio and Acoustic Signal Processing Technical Committee (2011-2012; Member, 1993-2010); Chair, Japan Chapter of the SPS (2009-2010); Secretary and Member-at-Large, IEEE Signal Processing Society Conferences Board (2010-2011); Member, Technical Directions Board (2011-2012); Member, Industry Relations Committee (2011-2016); Chair, Chapter Operations Committee, IEEE Japan Council (2015-2016); Member, Awards Board (2015-2017); Member, Industry DSP Technology Standing Committee (2018-2020); Member, IEEE Fellow Committee (2018-2020); and Member, Japanese Delegation to ISO/IEC JTC1/SC29/WG11 (a.k.a. MPEG) in 1990-1994, 2002, and 2007-2008.

Dr. Sugiyama was Technical Program Chair, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2012); Far-East Liaison, ICASSP (2001); Tutorial Chair, IEEE International Conference on Emerging Signal Processing Applications (ESPA) (2012); Special Session Chair, ICASSP (2017); Demonstration Session Chair, International Workshop on Acoustic Signal Enhancement (IWAENC) (2018); and Industry & Exhibition Chair, IEEE International Symposium on Circuits and Systems (ISCAS) (2019).

Dr. Sugiyama received the 1987 IEICE (The Institute of Electronics, Information and Communication Engineers of Japan) Shinohara Memorial Academic Encouragement Award; the Promotion Foundation for Electrical Science and Engineering Award (formerly the Ohm Technology Award) in 2001, 2013, and 2019; the IEICE Best Paper Award (2002); an Incentive Award from the Japanese Society for Artificial Intelligence (2005); the IEICE Achievement Award (2006 and 2018); the Sankei Newspaper Award, Fuji-Sankei Business i, Advanced Technology Award (2010); a Local Commendation for Invention in the Kanto Region (2011); the Contribution Award of the Ichimura Industry Award (2013); the Prize for Science and Technology, a Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology of Japan (2014); and the Industrial Distinguished Leader Award (2017) from the Asia-Pacific Signal and Information Processing Association (APSIPA).

Dr. Sugiyama is a Fellow of the IEEE and the IEICE. He served as a Distinguished Lecturer for the IEEE Signal Processing Society (2014-2015) and for the IEEE Consumer Electronics Society (2017-2018).

Akihiko (Ken) Sugiyama
Yahoo! Japan Research
E: a.sugiyama@ieee.org

Lecture Topics

  • 30 years of audio coding: How we arrived at audio playback on the iPhone and its underlying technology
  • A decade of personal robot development: from Windows 95 to a quad-core mobile processor
  • Signal enhancement in consumer products: single and multiple microphone solutions
  • Multichannel echo cancellation: discovery of the uniqueness problem and search for solutions
  • Phase: unexplored wilderness in signal enhancement
  • Mechanical noise suppression: new applications with new solutions
  • Wind noise suppression: challenges and solutions
  • History of personal media terminals: from the Walkman to the Apple Watch
  • Hearable devices: new directions with new functions
  • What I wish I knew when I was an entry-level engineer
  • A long and winding road: Confucius’ life and an engineer’s career
  • Easy and lazy technical writing for engineers and scientists: a step-by-step guide to establishing a solid logical structure
  • Unveiling the principle behind a solution through technical writing


2019 Distinguished Industry Speakers

Sergio Goma

Sergio Goma (SM) received the B.S., M.S., and PhD degrees from Politehnica University of Timișoara, Timisoara, Romania, in 1994, 1995, and 1998, respectively, all in computer engineering. Since 2008, Dr. Goma has been the Senior Director of Technology at Qualcomm Technologies, where he leads the R&D and standardization group for imaging. Previously, Dr. Goma was a Principal Member of Technical Staff at AMD, and prior to AMD, he was the architect of the image processing solution in ATI's Imageon series of chips.

Dr. Goma was a technical subgroup chair of CPIQ (Camera Phone Image Quality), now IEEE P1858, building and standardizing image quality test metrics and methodologies across the industry to correlate objective test results with human perception and to combine the data into a meaningful consumer rating system.

Dr. Goma was the technical lead for the AMD and ATI contributions to the standardization of the serial camera interface MIPI CSI-2, the most widely used camera interface in the mobile industry. It has achieved widespread adoption for its ease of use and ability to support a broad range of high-performance applications, including 1080p, 4K, 8K and beyond video, and high-resolution photography. His technical solutions were accepted as core technologies for both MIPI CSI-2 and MIPI DSI (e.g., the MIPI-specific ECC code, and the system definition and example implementation of MIPI CSI-2), becoming the foundation of MIPI CSI-2. He also drove interoperability implementations of MIPI CSI-2 across the industry, co-authoring the AMD/ATI CSI-2 implementation.

Before ATI, Dr. Goma designed high-performance CCD and CMOS cameras and imaging systems used in metrology and industrial automation at companies including Photon Dynamics (now Orbotech) and Taymer Industries. Dr. Goma serves as Associate Editor, IEEE Transactions on Image Processing (2013-present), IEEE Transactions on Computational Imaging (2015-present), and the Springer Journal of Real-Time Image Processing (2009-present); and as Committee Member, IS&T's Human Vision and Electronic Imaging (HVEI) Conference (2008-present) and Photography, Mobile and Immersive Imaging Conference (PMIIC) (2007-present). He served as Vice President of the Society for Imaging Science and Technology (2014-2018) and as a Committee Member of the Computational Imaging Special Interest Group from its inception in 2015 until it became the Computational Imaging Technical Committee, where he continues to serve as an Associate Member.

Dr. Goma received the 2014 Society for Imaging Science and Technology Service Award for serving as the 2014 Electronic Imaging Symposium Chair and for leading the effort toward IS&T's sole management of the Symposium beginning in 2016.

Dr. Goma’s research interests include programmable hardware architectures and hardware accelerators for imaging, computational photography, integrated imaging sensors, image quality, image processing, and computer vision.

Sergio Goma
E: sgoma@qti.qualcomm.com; sergiu_goma@ieee.org

Lecture Topics

  • Integrated Image Sensors with Compute Elements
  • Programmable Architectures for Image Processing

Neil Gordon

Neil Gordon (SM) received a BSc in Mathematics and Physics from the University of Nottingham in 1988 and a PhD in Statistics from Imperial College, University of London in 1993. He was with the Defence Evaluation and Research Agency in the UK from 1988 to 2002, working on statistical signal processing for a range of Defence applications. In 2002, he moved to Defence Science and Technology, part of the Department of Defence in Australia, where he currently leads the Intelligence Analytics branch. Dr. Gordon’s team of scientists and engineers conducts research and development to improve the situational awareness of intelligence analysts by extracting, fusing, and disseminating meaningful content from a wide range of diverse data sources and types. He provides expert scientific and technical advice to the Department of Defence and other Commonwealth agencies. He led the DST team providing scientific advice to the Australian Transport Safety Bureau for the MH370 search.

Dr. Gordon is an IEEE Senior Member and a Fellow of the Royal Statistical Society. He was Associate Editor, IEEE Signal Processing Letters (2015-2017) and has given invited plenary lectures at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2015) and the International Fusion Conference (2004 and 2018). He has co-authored 3 books, 4 book chapters, and over 90 peer-reviewed journal articles and conference papers.

Dr. Gordon’s main areas of research are particle filters and Bayesian methods for nonlinear estimation, and data and information fusion in distributed sensor networks.

Neil Gordon
E: neil.gordon@dst.defence.gov.au

Lecture Topics

  • Beyond the Kalman Filter: 25 Years of Particles and Other Random Points
  • Signal Processing and the Search for MH370

Pedro J. Moreno

Pedro J. Moreno (M) completed his Ph.D. degree in Electrical and Computer Engineering at Carnegie Mellon University in 1996, and his Telecommunications Engineering degree at Universidad Politecnica de Madrid, Spain in 1986. Prior to joining Google, Dr. Moreno held a research scientist position at HP Labs where he led research in audio mining and search.

Dr. Moreno leads a team of 50 engineers and scientists in the language modeling group, part of the Speech team at Google. His team is in charge of deploying speech recognition services in all supported languages and improving their quality. His team has pioneered the use of context signals in speech recognition systems. Dr. Moreno has been involved in building the Google technology behind every voice-activated application, used by billions of users in over 100 languages every day.

Dr. Moreno’s research interests include signal processing, machine learning, and statistical modeling, with applications to speech processing. He has published over 100 well-cited publications in conferences and journals and holds several patents.

Pedro Moreno
E: pedro@google.com; pmoreno@gmail.com

Lecture Topics

  • Speech Recognition
  • Language Modeling
  • Voice Assistants

Ashish Pandharipande

Ashish Pandharipande (SM) received the B.E. degree in Electronics and Communications Engineering from Osmania University, Hyderabad, India, in 1998, the M.S. degrees in Electrical and Computer Engineering, and Mathematics, and the Ph.D. degree in Electrical and Computer Engineering from the University of Iowa, Iowa City, in 2000, 2001 and 2002, respectively.

Dr. Pandharipande has since been a post-doctoral researcher at the University of Florida, a senior researcher at the Samsung Advanced Institute of Technology, and a senior scientist at Philips Research. He has held visiting positions at AT&T Laboratories, NJ, and the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, India. He is currently Lead R&D Engineer at Signify (formerly Philips Lighting) in Eindhoven, The Netherlands.

Dr. Pandharipande has served as Associate Editor, IEEE Transactions on Signal Processing (2012-2015); Associate Editor, IEEE Sensors Journal (2012-present), IEEE Signal Processing Letters (2016-present) and IEEE Journal of Biomedical and Health Informatics (2014-present); and Member, International Advisory Board, Lighting Research & Technology Journal (2010-present).

Dr. Pandharipande’s research interests are in sensing, wireless communications, controls, data analytics, and signal processing applications in domains like smart lighting, energy monitoring and control, and cognitive spectrum sharing. He has more than 160 scientific publications and about 90 patents/filings in these areas.

Ashish Pandharipande
E: ashish.p@signify.com; pashish@ieee.org

Lecture Topics

  • Sensing Technologies and Applications in Smart Lighting and Beyond
  • Sensor-Driven Smart Lighting Controls
  • Machine Learning in Connected Lighting: Applications and Opportunities
  • Sensing, Visible Light Communication and Illumination Control in LED Lighting Systems

Tao Zhang

Tao Zhang (SM) attended Nanjing University, Nanjing, China from 1982 to 1986, receiving his B.S. degree in physics in 1986; Peking University, Beijing, China from 1986 to 1989, receiving his M.S. degree in electrical engineering in 1989; and The Ohio State University, Columbus, OH, USA from 1991 to 1995, receiving his Ph.D. degree in speech and hearing science in 1995. He joined the Advanced Research Department at Starkey Laboratories, Inc. as a Senior Research Scientist in 2001, managed the DSP department at Starkey Laboratories, Inc. from 2004 to 2008 and the Signal Processing Research Department from 2008 to 2014, and has been Director of the Signal Processing Research department at Starkey Hearing Technologies since 2014.

Dr. Zhang is a Senior Member of IEEE and the Engineering in Medicine and Biology Society. He is a Member, IEEE Audio and Acoustic Signal Processing Technical Committee (2014-present); Member, IEEE Industrial Relationship Committee (2014-present); Member, IEEE ComSoc North America Region Board (2018-present); and IEEE Industry Convoy for the United States, Regions 1-6 (2017-present). He is the Chair of the IEEE Twin Cities Signal Processing and Communications Chapters (2013-present).

Since 2001, Dr. Zhang has actively promoted education and research in the field of hearing research and technology in the global research community. He has hosted many IEEE distinguished lectures and organized many joint IEEE and Starkey research seminars in hearing research and technology. He has sponsored best paper and other student awards at ICASSP, EUSIPCO, WASPAA, HSCMA, and EMBC, and organized the global research reception for audio, acoustic, and speech signal processing researchers working on hearing instruments at ICASSP 2013, 2015, and 2016. Dr. Zhang has also received many prestigious awards from Starkey Hearing Technologies, including the Outstanding Technical Leadership Award (2003), the Engineering Service Award (2007), the Most Valuable Idea Award (2009), the Mount Rainier Best Research Team Award (2016), and the Inventor of the Year Award (2018).

Dr. Zhang’s current research interests include audio, acoustic, speech, and music signal processing; multimodal signal processing and machine learning for hearing enhancement and health and wellness monitoring; psychoacoustics; room and ear canal acoustics; ultra-low-power embedded system design; and real-time fixed-point DSP algorithm design. He has authored over 120 presentations and publications, holds more than 20 granted patents, and has more than 30 additional patents pending.

Tao Zhang
E: tzhang28@ieee.org

Lecture Topics

  • Tackling the Cocktail Party Problem for Hearing Devices
  • Intelligent Hearing Aids: The Next Revolution
  • Robust and Practical Acoustic Feedback Control for Hearing Devices
  • Practical Challenges, Current Solutions and Future Directions for Hearing Devices
  • Multimodal Signal Processing and Machine Learning for Multipurpose Ear-Level Devices
