News and Resources for Members of the IEEE Signal Processing Society
Vineeth N. Balasubramanian is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Hyderabad (IIT-H), India, and currently also serves as the Head of the Department of Artificial Intelligence at IIT-H. His research interests include deep learning, machine learning, and computer vision. His research has resulted in over 100 peer-reviewed publications at various international venues, including top-tier ones such as ICML, NeurIPS, CVPR, ICCV, KDD, and AAAI. His PhD dissertation at Arizona State University on the Conformal Predictions framework was nominated for the Outstanding PhD Dissertation Award by the Department of Computer Science. He is an active reviewer and contributor at conferences such as NeurIPS, CVPR, ICCV, AAAI, and IJCAI, as well as journals including IEEE TPAMI, IEEE TNNLS, JMLR, and Machine Learning, with recent Outstanding Reviewer awards at CVPR 2019 and ECCV 2020. He is also a recipient of the Teaching Excellence Award at IIT-H in 2017. His research is funded by various organisations including DST, MeitY, DRDO, Microsoft Research, Adobe, Intel, and Honeywell. He currently serves as the Secretary of the AAAI India Chapter. For more details, please visit his page: https://iith.ac.in/~vineethnb/.
Q. Please share your current research work and its relevance in the machine learning context.
My group’s research interests lie at the intersection of the theory and application of machine learning, with a focus on applications in computer vision. With a strong interest in mathematical fundamentals and a passion for real-world application, we do our best to carry out impactful research in deep learning, machine learning and computer vision, guided by application contexts derived from real-world use. From an algorithmic standpoint, our recent work has focused on two directions of significant contemporary interest to the broader machine learning community, academia and industry alike: (i) explainable machine/deep learning, and (ii) learning with limited supervision.
As the use of machine learning proliferates into risk-sensitive and safety-critical applications, it becomes essential to be able to explain the predictions of machine/deep learning models, to make them trustworthy. We are looking to address this from multiple perspectives: meaningful visual explanations of Convolutional Neural Networks (CNNs), exploring vision-language models for human-understandable explanations, as well as the complementarity of explainability and model robustness. Beyond these, we have been very interested in explaining machine learning models using perspectives of causality. As Judea Pearl states in his ladder of causation, the integration of causal inference in machine learning explores problem dimensions beyond standard prediction, which provides exciting perspectives to study.
The current success of machine learning is largely based on supervised learning, which in turn relies on the availability of large amounts of curated labeled data (especially in the case of deep learning models). However, when we envision machine learning models revolutionizing a wide gamut of applications ranging from agriculture to governance, many problem domains - especially ones in which large commercial entities may not have business interests - suffer from a paucity of labeled data. While one approach is to encourage the collection and curation of large datasets, another is to devise and develop machine learning algorithms that can learn accurate models with limited labeled data. Developing such algorithms involves problem settings such as zero-shot learning, few-shot learning, continual learning, active learning, domain adaptation, and domain generalization, which are critical for using machine learning even in data-deprived domains. This is another of our focus areas.
From an application standpoint, we work actively on applying machine learning models, including our own, on problems in agriculture, drone imagery, autonomous navigation and human behavior understanding.
Q. Would you please share any of your impactful work with us?
Our group has been able to contribute meaningfully in the broad areas of deep learning and computer vision in recent years, especially in our above-mentioned focus areas. One of our first efforts in explainable AI (XAI), Grad-CAM++, published in IEEE WACV 2018, provided a methodology for generating saliency maps that are faithful to a CNN model and can be used for both images and videos. This work also introduced new evaluation metrics for saliency maps. Grad-CAM++ is used today by researchers around the world for explaining CNN models in various applications, such as finding defective cells in solar arrays, explaining cancer prediction on gene expression data, identification of pathogens in tomograms, leaf counting, genus classification in plant images, and more recently, COVID detection in chest X-ray images. We are happy to have contributed a method that is practically useful in many different applications. More recently, at ICML 2019, we developed a method to identify causal attributions of neural networks from first principles. This work is gradually garnering attention, and can be valuable when one would like to understand what causal attribution a trained neural network deployed in an application has actually learned. This is especially useful in risk-sensitive applications such as healthcare or aerospace. We also have other recent and ongoing contributions at the intersection of XAI and model robustness, as well as the intersection of causal inference and deep learning models, which we hope will be even more impactful in time to come.
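To give a concrete feel for the kind of gradient-based saliency maps discussed above, here is a minimal Grad-CAM-style sketch in PyTorch. It illustrates the simpler precursor of Grad-CAM++, not the authors' released code; the choice of ResNet-18, the target layer, and the input size are illustrative assumptions.

```python
# Minimal Grad-CAM-style saliency sketch (illustrative; not the Grad-CAM++ release).
# Assumes a torchvision ResNet-18 with its last conv block (layer4) as the target layer.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """image: (1, 3, 224, 224) tensor, already normalized for the backbone."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    feats = activations["feat"]                       # (1, C, h, w) activations
    grads = gradients["feat"]                         # (1, C, h, w) gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam, class_idx
```

The resulting map can be overlaid on the input image as a heatmap to show which regions most influenced the predicted class.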
Beyond the above, our other recent notable efforts include: (i) a very simple method for few-shot learning that focuses on learning the backbone network of a CNN in a more robust manner, beats existing benchmarks significantly, and has gained attention from researchers around the world; (ii) a zero-shot task transfer method that surpasses supervised models’ performance when transferring knowledge to new tasks with absolutely no ground truth, provided the tasks are reasonably correlated; (iii) DAiSEE (https://iith.ac.in/~daisee-dataset/), a dataset for affective states in e-environments, especially learning environments, which have become necessary in the ongoing pandemic situation; and many more. We have also had a recent focus on deep learning for agriculture, where we have developed a method to estimate the heading date of paddy crops, as well as a recent edge computing tool, Easy-RFP, for plant phenotyping in the field. We are currently working on making this tool publicly available as a web tool or even a smartphone app that is actually used by practitioners in the field to monitor their crops. We look forward to researchers and consumers using more of our contributions.
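As a generic illustration of the "strong backbone plus simple classifier" idea behind much recent few-shot learning work (and not the specific method described above), one could freeze a pretrained feature extractor, compute one prototype per class from the few labeled support images, and classify queries by nearest prototype. The backbone choice and the cosine-similarity scoring here are assumptions made for the sketch.

```python
# Generic nearest-prototype few-shot classifier over a frozen backbone (illustrative).
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose penultimate 512-d features
backbone.eval()

@torch.no_grad()
def few_shot_predict(support_x, support_y, query_x, n_way):
    """support_x: (N*K, 3, H, W) images, support_y: (N*K,) labels in [0, n_way),
    query_x: (Q, 3, H, W) images. Returns predicted labels for the queries."""
    s_feat = F.normalize(backbone(support_x), dim=1)
    q_feat = F.normalize(backbone(query_x), dim=1)
    # one prototype per class: mean of that class's support embeddings
    prototypes = torch.stack(
        [s_feat[support_y == c].mean(dim=0) for c in range(n_way)]
    )
    # cosine similarity between each query and each prototype
    sims = q_feat @ prototypes.t()
    return sims.argmax(dim=1)
```

The appeal of this family of approaches is that all the learning effort goes into the backbone; at test time, adapting to new classes requires only a handful of labeled examples and no gradient updates.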
I believe these are humble beginnings of more contributions to come from our group, which is filled with enterprising, talented and motivated students. Considering we are one of the second-generation IITs (which had limited resources and visibility when we started out), seeing our students succeed at the highest level, including publications at top-tier venues and recognitions/offers such as the Google AI Residency, Facebook AI Residency, the Shastri Indo-Canadian Research Student Fellowship and many more, has been a heartening impact from a skilling perspective.
Q. In your opinion, what are some of the most exciting areas of research in ML for students and upcoming researchers?
Machine learning has evolved into a very large area today, covering mathematical foundations, applied topics in vision/language/speech/graph understanding, recent dimensions in fairness/accountability/transparency/ethics, as well as the intersection of ML with other disciplines such as physics, bioinformatics, civil engineering or, for that matter, any other field. In my opinion, each of these topics can be as exciting as you set it out to be. From a student or upcoming researcher perspective, it is important to choose an area of interest and comfort and start making contributions; interests will anyway meander as one develops more research taste and maturity. Resources are publicly available online, and it is important for a student to focus on one area for a reasonable period of time and make an impactful contribution. Of course, the topic has to be contemporary, which one can validate by following the proceedings of top-tier conferences such as NeurIPS, ICML, CVPR, ACL, AAAI, etc., which are anyway in the public domain these days. If the student can find a suitable mentor whose interests and background align with theirs, that would be ideal. Irrespective of the area of interest, choosing the right problem or question to study is very important, and a mentor or a peer group can be very useful for this purpose.
Having said this, I am personally enthused about the areas we work on: explainable/trustworthy machine learning, and learning with limited supervision. Both areas are exciting, with a lot of work to be done both foundationally and in terms of impact. Using domain knowledge for explaining deep learning models; developing mathematical foundations of XAI; the intersection of causal inference and machine learning; the complementarity of explainability and robustness; leveraging self-supervised learning in its various forms for different downstream tasks and settings; unifying the perspectives of learning with varied levels of supervision – all of these are very interesting problems for students to work on.
Q. What challenges do you think our current student population faces as far as preparedness in ML is concerned?
Strangely, one of the biggest challenges the current student population seems to face, as I see it, is the abundance of resources. While there are so many learning resources available online, often for free, knowing where to start and how to navigate these resources – rather, the curation of these resources into a wholesome learning experience – is perhaps one of the biggest challenges confronting students. A second challenge, in my opinion, is clarity of purpose. While learning AI/ML/Data Science has become a fad, it does not become useful unless one is clear about one’s purpose for the learning. Whatever the purpose may be – becoming a researcher, applying ML to industry problems, promoting social good through ML, or pedagogy in ML – being clear about what one wants to do with the knowledge helps in learning the appropriate content. For a country like India, we have an abundance of talent and hunger in students, but the number of mentors who can guide students to carry out impactful work at the highest level (be it research or development) is unfortunately small and insufficient. I do hope that these challenges will be addressed in the coming years.
Q. There are many online courses available in ML. Can you please share your suggestions for students so that they benefit the most from online courses in this area?
As I just mentioned, a big challenge for students is the abundance of learning resources and the paucity of corresponding structure in the learning. Curation and sequencing of content are critical, and this is where strong brick-and-mortar academic institutions, despite the recent growth of e-learning platforms, are still looked up to. One should ideally start with mathematical foundations such as linear algebra and probability, followed by basic courses in machine learning and deep learning, and then go deeper into a specialization in applications or theory. The problem here is perhaps peer pressure and the propensity for quick rewards, where students sometimes want to learn everything in six months and get confused in the process. I personally think it takes time to get deep into a field, and staying invested in an area and learning and working on it for a reasonable period of time is essential. In certain cases, I would also recommend “multi-resolution learning”, where one gets a coarse understanding of the subject first and follows up with more reading and learning to continually refine one’s understanding. This could help certain students with ML, instead of a purely linear/sequential way of learning the subject. A mentor or a healthy peer group can once again be very useful in this context.
From a different perspective, I do see many students from different academic training backgrounds wanting to quickly switch completely to ML these days. ML is useful across almost all domains today, and it may be wise for a student to explore the use of ML in their own domain first, rather than jump completely from a different background to ML full-time; subsequent opportunities may be difficult to come by in the latter case. Interdisciplinary ML is also interesting in itself, and one should build on one’s strengths while capitalizing on the AI/ML revolution.