News and Resources for Members of the IEEE Signal Processing Society
PhD Title: Low-dimensional Representation of Visual Dynamics towards Animal/Human Behavior Monitoring
By: Behnaz Rezaei, Augmented Cognition Lab (ACLab), ECE Department, Northeastern University
Advisor: Prof. Sarah Ostadabbas
Abstract: The human visual system has a unique ability to conceptualize the dynamics of objects' interactions in a scene. We are not only able to detect the motion of objects (including articulated and deformable ones, such as humans and animals) in a given scene, but can also distinguish the different types of motion patterns their bodies exhibit during various interactive actions. In the area of artificial intelligence (AI), the question arises of how machines can acquire the capacity to visually discover objects and their actions by perceiving their motion dynamics with minimal supervision from human annotations. In this dissertation, we discuss algorithms for visually understanding and predicting objects' actions by detecting the objects without supervision and observing how they move over time and space.
Toward this objective, we focus on learning interpretable representations of real-world object interactions from video segments of human- and animal-involved actions. We first study the problem of learning to segment moving objects from their background without supervision, and present a class of low-rank background modeling methods for segmenting moving objects in videos. Unlike prior methods based on sparse representations of the moving objects, which require heavy computation, we develop a computationally efficient method based on the robust matrix completion framework that is fast yet robust to scene dynamics and motion non-saliency. Further, we propose a universal model of the background in videos recorded from different scenes by equipping the low-rank representation of the background with Bayesian non-linear manifold learning; the aim is a more robust representation of background dynamics that generalizes to unseen videos. Then, toward mimicking the human ability to recognize human/animal actions from the movement of body keypoints, we study the problem of perceiving human actions by watching how people move. In this regard, we develop a targeted human action recognition algorithm based on pose evolution maps, applied to Parkinson's patients as a special target case.
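To make the low-rank background modeling idea concrete, here is a minimal, self-contained sketch. It is not the dissertation's robust matrix completion algorithm; it only illustrates the underlying assumption that frames of a (near-)static scene, stacked as columns of a matrix, are well approximated by a low-rank component, while moving objects appear as sparse residuals. The function name, threshold, and toy video below are illustrative assumptions, not part of the original work.

```python
# Minimal sketch of low-rank background subtraction, assuming grayscale
# frames of fixed size with values in [0, 1]. Illustrative only; the
# dissertation's method is based on robust matrix completion instead.
import numpy as np

def lowrank_background_subtraction(frames, rank=1, thresh=0.2):
    """frames: array of shape (num_frames, height, width)."""
    n, h, w = frames.shape
    # Each column of D is one vectorized frame.
    D = frames.reshape(n, h * w).T                      # shape (h*w, n)
    # Truncated SVD: keep the top `rank` components as the background model.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]         # low-rank background
    # Residual magnitude above a threshold is flagged as moving foreground.
    S = np.abs(D - L)
    masks = (S > thresh).T.reshape(n, h, w)
    background = L.T.reshape(n, h, w)
    return background, masks

# Toy usage: a static gradient background with a small bright square
# sliding to the right over 30 frames.
frames = np.tile(np.linspace(0.0, 0.5, 64), (30, 64, 1))
for t in range(30):
    frames[t, 10:16, t:t + 6] = 1.0
bg, fg = lowrank_background_subtraction(frames, rank=1, thresh=0.2)
print(fg.sum(axis=(1, 2)))   # number of pixels flagged as foreground per frame
```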
| Nomination/Position | Deadline |
|---|---|
| Call for Nominations: IEEE Technical Field Awards | 15 January 2025 |
| Nominate an IEEE Fellow Today! | 7 February 2025 |
| Call for Nominations for IEEE SPS Editors-in-Chief | 10 February 2025 |