Low-dimensional Representation of Visual Dynamics towards Animal/Human Behavior Monitoring


PhD Title: Low-dimensional Representation of Visual Dynamics towards Animal/Human Behavior Monitoring 

By: Behnaz Rezaei, Augmented Cognition Lab (ACLab), ECE Department, Northeastern University 

Advisor: Prof. Sarah Ostadabbas

Abstract: The human visual system has a unique ability to conceptualize the dynamics of objects' interactions in a scene. We are not only able to detect objects' motion (including articulated and deformable ones, such as humans and animals) in a given scene, but also to distinguish different types of motion patterns in their bodies during various interactive actions. Working in the area of artificial intelligence (AI), the question arises of how machines can achieve the capacity to visually discover objects and their actions by perceiving their motion dynamics with minimal supervision from human annotations. In this dissertation, we discuss algorithms for visually understanding and predicting objects' actions by detecting the objects without supervision and observing how they move over time and space.

Towards achieving this ultimate objective, in this research we focus on learning interpretable representations of real-world object interactions from human/animal-involved action video segments. We first study the problem of learning to segment moving objects from their background without supervision. We present a class of low-rank background modeling methods for segmenting moving objects in videos. Unlike prior methods based on sparse representations of the moving objects, which suffer from heavy computational procedures, we develop a computationally efficient method based on a robust matrix completion framework that is fast yet robust to scene dynamics and motion non-saliency. Further, we propose a universal model of the background in videos recorded from different scenes by equipping the low-rank representation of the background with Bayesian non-linear manifold learning. The aim is to provide a more robust representation of the background dynamics that generalizes to unseen videos. Then, towards mimicking the human capability of recognizing human/animal actions from the movement of their body keypoints, we study the problem of perceiving human actions by watching how they move. In this regard, we develop a targeted human action recognition algorithm based on pose evolution maps and apply it to Parkinson's disease patients as the target population.
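As a rough illustration of the low-rank background modeling idea described above, the following Python sketch decomposes a stack of vectorized video frames into a low-rank background term and a sparse foreground term using robust PCA (principal component pursuit), solved with a simple inexact augmented Lagrangian scheme. It is not the dissertation's robust matrix completion algorithm or its Bayesian manifold extension; the function names and the choices of the sparsity weight, penalty parameter, and tolerance are illustrative assumptions.

```python
# Minimal sketch of low-rank plus sparse video decomposition (robust PCA via
# principal component pursuit). Illustrates the general idea behind low-rank
# background modeling for moving-object segmentation; NOT the dissertation's
# robust matrix completion method. Parameter choices are common RPCA defaults.
import numpy as np


def soft_threshold(x, tau):
    """Element-wise shrinkage operator used for the sparse (foreground) term."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def svd_threshold(x, tau):
    """Singular-value shrinkage operator used for the low-rank (background) term."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt


def rpca(frames, max_iter=200, tol=1e-6):
    """Decompose a (pixels x frames) matrix D into low-rank background L
    and sparse foreground S such that D ~= L + S."""
    D = frames.astype(np.float64)
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                  # standard weight on the sparse term
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)   # common penalty initialization
    Y = np.zeros_like(D)                            # Lagrange multipliers
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S


if __name__ == "__main__":
    # Synthetic example: a static background plus a few "moving" pixels per frame.
    rng = np.random.default_rng(0)
    background = rng.uniform(size=(1000, 1))              # one vectorized background image
    video = np.tile(background, (1, 60))                  # 60 identical frames -> rank-1
    video[rng.integers(0, 1000, 120), rng.integers(0, 60, 120)] += 1.0  # sparse foreground
    L, S = rpca(video)
    print("estimated background rank:", np.linalg.matrix_rank(L, tol=1e-3))
    print("foreground support size:", int((np.abs(S) > 0.5).sum()))
```

In this setting each column of the data matrix is one vectorized frame, so a rank-deficient L captures the largely static background, and thresholding the magnitude of S yields a per-frame moving-object mask.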
