Industry Leaders in Signal Processing and Machine Learning: Dr. Meredith Ringel Morris


By: Hamid Palangi

Dr. Meredith Ringel Morris is Director of People + AI Research at Google. Prior to joining Google Research, Dr. Morris was Research Area Manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research, where she founded Microsoft’s Ability research group. She is also an Affiliate Professor at the University of Washington in the Allen School of Computer Science & Engineering and in The Information School. Her research on collaboration and social technologies has contributed new systems, methods, and insights to diverse areas of computing, including gesture interaction, information retrieval, accessibility, and human-centered AI. Dr. Morris earned her Sc.B. in Computer Science from Brown University and her M.S. and Ph.D. in Computer Science from Stanford University. She is an ACM Fellow and a member of the ACM SIGCHI Academy.

We approached Meredith Ringel Morris with a few questions:

1. What challenges have you had to face to get to where you are today? 

Not knowing that computer science existed as a discipline was a challenge; my high school didn’t offer computer programming courses. The only computer course I could take was one that taught touch-typing skills, though that was quite useful–I can touch-type over 100 WPM. I was fortunate to be selected to attend the Pennsylvania Governor’s School for the Sciences (PGSS) the summer before my senior year; this was a program at Carnegie Mellon University that allowed local students interested in STEM to do college-level coursework over the summer at no cost, and this was where I first learned about computer programming. If I had not tried programming at PGSS, I would never have thought to study computer science in college.

A challenge that I continue to experience is sub-discipline bias regarding Human-Computer Interaction (HCI) as a field within computing. Encouragingly, I think there is a growing awareness that ensuring that computing systems are usable by and useful to a broad range of people is a critical area of computing research. For instance, there is growing interest in human-centered AI, which brings methods from HCI to bear on AI technologies, including technical methods (e.g., for designing novel human-AI interactions and increasing transparency via Explainable AI techniques) and socio-technical methods (e.g., for understanding the impacts of AI and associated ethics and fairness considerations).

2. What was the most important factor in your success? 

Having great mentors has been an important factor. My undergraduate mentor Andy van Dam, my graduate advisor Terry Winograd, and my mentors at Microsoft–Eric Horvitz and Susan Dumais–were all critical to my success and growth as a researcher. 

Having great colleagues has also been important. I’m using “I” pronouns instead of “we” pronouns since this is an interview, but all of the research I’m discussing has been a team effort. I have been so fortunate to be able to collaborate with my colleagues on the PAIR team at Google Research, on the Ability team at Microsoft Research, and with the more than 50 graduate students I’ve mentored at those companies and at the University of Washington over the past 16 years.

3. How does your work affect society? 

At Google, I lead the People + AI Research team (PAIR). PAIR’s mission is to conduct human-centered research and design to make people + AI partnerships productive, enjoyable, and fair. PAIR’s foundational research and engineering efforts are a key part of Google’s Responsible AI effort. We release a variety of public materials and open-source projects to support AI practitioners and researchers. For example, the People + AI Guidebook provides a set of methods and best practices for designing with AI, including case studies and workshop materials; the Know Your Data tool supports understanding large-scale datasets, which can help improve data quality and mitigate fairness and bias challenges; and the Language Interpretability Tool supports understanding of NLP models through interactive visualization techniques.

Prior to joining Google Research, I founded the Ability Team at Microsoft Research. The Ability Team conducts user-centered research to advance the state of the art in accessible technologies that empower people with long-term, temporary, or situational disabilities. For instance, my research at Microsoft included exploring how to make Mixed Reality accessible to people with varied levels of vision, hearing, and/or mobility, as well as how to improve the quality of automatically generated image descriptions.

I’ve merged my interest in accessibility research with PAIR’s focus on human-AI partnerships by contributing to projects at Google on how language models like LaMDA can facilitate communication for people with dyslexia and for people with ALS who rely on augmentative and alternative communication technologies.

4. If there is one take-home message you want the readers of this interview to have, what would it be?

My mantra is “Computer Science is people.” Yes, this is a spoof on the famous line from Soylent Green because I am, after all, a nerd. But seriously, back to my earlier remark about sub-discipline bias regarding HCI – I would argue that not only is HCI definitely computer science, but it is central to computer science – there is no purpose to creating faster, smaller, more secure, more powerful computing systems if they are not meeting the needs of end-users and aligning to our values as a society. And I believe that increasing the diversity of who is creating the next generation of computing technologies – including both disciplinary and demographic diversity – is critical for ensuring that all of the subfields within computing are making advances that address the right challenges.  

5. Failures are an inevitable part of everyone’s career journey. What is the most important lesson you have learned during your career when dealing with failures?

One challenge for researchers can often be “calling” a failure, i.e., making the decision of when to sunset a project that isn’t working out and move on. I think this is a challenging skill to learn and something I still struggle with sometimes. As researchers we have a lot of passion for our work and become very attached to ideas. But “failing fast” is an important skill; it is wonderful that research careers allow us to take risks on novel ideas – if none of them were failures, it would mean we weren’t being bold and risky enough.

6. Although novelty and innovation are the most important factors for technology advancement, when a researcher, scientist, or engineer has a new idea, there is often a lot of pushback until they prove the new idea actually works. What is your advice on how to handle this, especially for readers who are in the early stages of their careers?

I know that the paper review process can be particularly discouraging for early-career researchers, and sometimes it can be difficult to know when to take a critique to heart vs. when a reviewer is off-base in their assessment. My personal rule of thumb is that if a single person makes a comment, perhaps it is just a case of “Reviewer #2 Syndrome,” but if two people make a similar comment, then it is time to revisit my assumptions about the work. I give more detailed advice about responding to critical reviews in my “rebuttal writing guide,” which I’ve heard many students find helpful.


To learn more about Meredith Ringel Morris, visit her webpage.