Industry Leaders in Signal Processing and Machine Learning: Tomas Mikolov


News and Resources for Members of the IEEE Signal Processing Society


By: Hamid Palangi

Tomas Mikolov is the creator of word2vec, which was developed in 2013 by a group of researchers he led at Google Brain and has been widely used ever since (28,000 citations). He pioneered the use of Recurrent Neural Networks (RNNs) for natural language, which replaced the n-gram language models that had been state of the art for decades. He later moved to Facebook AI, where he contributed to the fastText project, which currently includes word representations for 157 languages. He now leads a research group at CIIRC in Prague focused on developing mathematical models that can spontaneously evolve and increase in complexity.

We approached Tomas Mikolov with a few questions:

1. In your own words, please tell us about your background.

I obtained my PhD from Brno University of Technology in 2012 for my work on recurrent neural networks and language models. I was always interested in the connection between language and intelligence, and I was very excited when my work finally led to a major improvement over the n-gram language models that had been the state of the art for decades. After finishing my PhD, I joined the Google Brain team, where I developed a popular algorithm for learning word representations (word2vec) and some of the earliest models for neural machine translation. In 2014, I moved to Facebook AI to focus more on fundamental AI research. More specifically, I was interested in understanding how learning from communication can be accomplished in cases where there are no well-defined rewards or supervision. However, I also continued working on applied natural language processing (the fastText project). In 2020, I joined the CIIRC research institute in Prague to start a new research group focused on developing mathematical models that can spontaneously evolve and increase in complexity, as I see evolutionary principles as one possible path toward general AI.

2. What challenges did you have to face to get where you are today?

I think the most frequent challenge I faced during my career was the disbelief of others. When I wanted to work on recurrent neural networks, I was told that it had been proven they could not be trained and that I should just give up. When I wanted to apply neural networks to language, I received a terrible review of my work from a local NLP professor who wrote that all my ideas were rubbish and that neural networks would never do anything useful for language. When I reported a 50% perplexity reduction on difficult benchmarks, many scientists told me I must have a bug in my setup because the improvement was too big. When I showed that one can form simple analogies using word vectors - the famous 'King - man + woman = Queen' example - there was also a lot of disbelief at first. And when it finally became clear my results were not fake, I was told that I had just been lucky. I think the research community would be much more pleasant if there were a bit less narcissism and a bit more curiosity.
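The 'King - man + woman = Queen' analogy mentioned above can be illustrated with a small sketch: a vector offset is computed and the nearest word by cosine similarity is returned. The vectors below are invented for demonstration only; real word2vec embeddings are learned from large corpora and have hundreds of dimensions.

```python
import numpy as np

# Toy 3-dimensional word vectors, invented for illustration
# (not learned word2vec embeddings).
vectors = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.5, 0.9, 0.0]),
    "woman":  np.array([0.5, 0.1, 0.9]),
    "prince": np.array([0.9, 0.8, 0.0]),
    "apple":  np.array([0.1, 0.5, 0.5]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    # Solve "a - b + c = ?" by nearest cosine neighbor,
    # excluding the three input words themselves.
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy("king", "man", "woman"))  # -> queen (with these toy vectors)
```

In practice, libraries such as gensim expose the same operation over trained embeddings (e.g. `KeyedVectors.most_similar(positive=['king', 'woman'], negative=['man'])`).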

3. What was the most important factor in your success? 

I believe the most important thing is not to give up and, although it may sound very cheesy, to believe in yourself, to have your own goals, and to work on what you find most exciting.

4. How does your work affect society? 

My ideas propagated into the NLP community and into many products used by billions of people. Today, Google Translate is built around neural networks, which was my dream goal only about ten years ago - and something I started working on immediately after joining Google in 2012. To sum it up, my work improved the efficiency with which computers deal with natural language.

5. If there is one take-home message you want the readers of this interview to have, what would it be?

My message is that doing research is an exciting career, and the moments when you do something new for the first time ever are amazing. When I generated text from neural language models as a student in 2007, I knew I was observing something that nobody had seen before me. The text was much more fluent than text generated from n-gram models, and I knew this was the future. Today, we can no longer set out to discover a new continent; however, one can still have an amazing adventure doing basic research.

6. Failures are an inevitable part of everyone's career journey. What is the most important lesson you learned during your career about dealing with failures?

That even when you fail, you may obtain something useful - just fail differently than others do. In fact, for most of my career I had very ambitious goals - like developing real AI - and when I failed to fulfill them, I sometimes ended up with something useful anyway. The recurrent neural language models, and word2vec, are examples - I did not really aim for these from the start; they ended up as simplifications of more ambitious projects (which failed).

7. Although novelty and innovation are the most important factors in technology advancement, when a researcher, scientist, or engineer has a new idea, there is a lot of pushback until they prove the idea actually works. What is your advice on how to handle this, especially for readers who are in the early stages of their careers?

That is quite a tough topic. On one hand, we live in a publish-or-perish world, which in my opinion discourages risky projects. On the other hand, we get to live only once, and if you play it safe throughout your career, you may end up having quite a boring one. So my advice would be: do what excites you the most, and don't worry so much about recognition from others.

8. Anything else that you would like to add? 

Thanks for inviting me for the interview.


To learn more about Tomas Mikolov, visit his webpage.

 
