News and Resources for Members of the IEEE Signal Processing Society
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, "the Nobel Prize of Computing," alongside Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as a Senior Fellow and serves as Scientific Director of IVADO. In 2019 he was awarded the prestigious Killam Prize, and in 2021 he became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, and an Officer of the Order of Canada. Concerned about the social impact of AI and committed to ensuring that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.
1. In your own words, please tell us about your background.
I fell in love with AI and neural networks research in the early days of my graduate studies (circa 1985), as I was looking for a research topic. My PhD (1991) introduced novel machine learning architectures for modeling sequential data, combining convolutional neural networks, recurrent neural networks, and probabilistic graphical models. My postdocs at MIT (1991-1992) and then at AT&T Bell Labs (1992-1993) broadened my network of collaborators (including, for example, Yann LeCun), and there I identified and published on the fundamental challenge of learning long-term dependencies in recurrent architectures. In 1993, I took a faculty position at U. Montreal and started my own lab, which later grew to multiple professors at U. Montreal and more recently became the modern Mila, a large multi-university AI research institute (over 50 professors and 500 graduate students) mostly driven by researchers at U. Montreal and McGill University.
In the 2000s, I realized the theoretical and practical importance of distributed representations and depth in neural networks and contributed to the emergence of the field of deep learning. In the 2010s, my lab generated a number of crucial advances in deep learning, from generative models like GANs to soft-attention mechanisms -- which led to a revolution in machine translation and later, thanks to attention-based transformer architectures, to the current state of the art in most natural language processing systems. These contributions to science led to the Turing Award I received in 2019 with Geoff Hinton and Yann LeCun, along with many other prizes. In the last few years, I have turned my attention to the limitations of current deep learning, taking inspiration from human cognition and research on causality: my current research program aims to incorporate these inductive biases into neural networks in order to enable the kind of robust out-of-distribution generalization enjoyed by humans. I have also invested substantial energy in responsible AI, both on the front of ethical principles and in terms of applications of machine learning to socially important challenges, e.g., improving health and fighting climate change.
2. What challenges have you had to face to get to where you are today?
I have been very lucky in my life, and privileged in many ways. One challenge I had to face on a personal level when I was younger was my very poor socialization skills, but I was fortunate to meet people who helped me learn rather than rejecting me. As for how my work related to trends in the scientific community, for about 15 years I faced the challenge of working on a topic that was out of fashion (neural networks): publishing was not always easy, nor was it easy to convince my students to work on these topics rather than on trendier ones.
3. What was the most important factor in your success?
I believe that the most important factor in my success was my ability to cut myself off from the outside world, focus on abstract ideas, and imagine solutions to problems, which would just pop into my mind. However, in my case this only worked with enough intellectual stimulation (from others) and enough moments when I could be immersed in my thoughts (with no outside interruption). The second most important factor was my self-confidence, which I owe mostly to how my parents raised me. Another important factor was my interaction with colleagues and collaborators (including students and postdocs) who inspired and stimulated me.
4. How does your work affect society?
Deep learning has transformed AI research and practice and is being deployed throughout society. This can be exhilarating but also scary, because powerful technologies can bring a lot of good or be abused to society's detriment. This is why, over the last five or six years, I have felt it was my duty to become deeply involved in discussions about the ethics and responsible development of AI, as well as to work on AI-for-social-good applications. I have come to realize that science cannot be completely separated from politics and economics when technologies derived from science become weapons in some hands and life-saving tools in others.
5. If there is one take home message you would like the readers of this interview to take away, what would that be?
At the top of the hierarchy of human needs (like Maslow's) is the meaning that we give to our life and our work, the mission we choose for ourselves. Beyond salary and social status, this is something that becomes more important (once basic needs are covered), especially as you grow older and think about your life retrospectively (or imagine yourself older, looking back on your life). I have found that in my professional life, the strongest sources of satisfaction are the pursuit of truth and knowledge on one hand, and on the other, acting in ways that help others, reduce suffering, and improve the world and society. That is why I pursue long-term research goals, in my case to figure out intelligence; that is why I get so much satisfaction from seeing my graduate students flourish; and that is why I invest a significant fraction of my time in AI-for-social-good projects.
6. Failures are an inevitable part of everyone’s career journey. What is the most important lesson you have learned during your career when dealing with failures?
There is a fine but important line to tread between listening to one's inner voice and intuition -- which is what allowed me to get through the more difficult periods when my ideas were not popular, and to pursue a long-term vision -- and ignoring the signals -- such as negative experimental results, or the arguments brought by others -- which may indicate that our intuitions and ideas are imperfect (and sometimes need to be revised completely). In other words, we need to listen both to our intuitions and to the signals indicating that they are wrong, setting our ego aside as much as possible.
7. Although novelty and innovation are the most important factors for technological advancement, when a researcher, scientist, or engineer has a new idea, there is a lot of pushback until they prove that the new idea actually works. What is your advice on how to handle it, especially for readers who are in the early stages of their career?
My answers above provide some elements of my advice. Surround yourself with people who will recognize your work and your talent (as opposed to pushing you down), because lack of confidence kills our ability to listen to our intuition (especially when it contradicts what others are saying, or generates thoughts too different from the oft-repeated memes), and intuition is the source of most breakthroughs. Self-confidence should of course not blind us to evidence and reason (which the protection of our ego may lead us to do without our consciously choosing it). Ego can also destroy our ability to discover truth, often because we identify personally with an idea, instead of thinking of ideas as objects we are lucky to encounter but are free to let go of as well. Similarly, consider the pushback as potentially containing important information. Don't brush it away. It is likely to contain some truth, even if inconvenient. At the end of the day, it is not about whose ideas won out, but about whether and when truth emerges, and truth is often much more complex than our thoughts may suggest. Maybe both your opponents and you are right in some sense.
8. Is there anything else that you would like to add?
AI is at a crossroads in many senses. We have accomplished amazing progress, but the gap to human intelligence is still great, with unknowns lying on our path to understanding and building intelligence. In spite of what some people think, we are far from done with AI research. Scaling up will not be enough, I believe. That means that lots of exciting challenges are still ahead. Another kind of crossroads concerns AI innovation and its deployment. As technology becomes more powerful, we must find a way to become collectively and individually wiser, to avoid self-destruction and to use these new tools in beneficial rather than nefarious ways. Or we must be much more careful (and maybe slower) in the way we deploy powerful technologies. If these technologies give each person on Earth the power to hurt everyone else, then anger and hate (and thus the injustices and inequalities that drive them) must imperatively be tamed and reduced, or we will all suffer. Love must prevail. We have to realize that we are all in the same boat: doing so can be a great source of satisfaction and save us from catastrophic outcomes. Unfortunately, our society is not currently organized to face that reality, and thus to face challenges like pandemics or climate change. For example, governments should invest a lot more in AI-for-social-good applications that are important for society, even when they are not commercially interesting. I am nonetheless optimistic that we can reform our social, economic, and political organization to maximize our collective well-being, and we have to keep trying even if it seems a daunting task and progress feels too slow.
To learn more about Yoshua Bengio, visit his webpage.