Current approaches for human pose estimation in videos can be categorized into per-frame and warping-based methods, each with its own pros and cons. Per-frame methods are generally more accurate but often slow, whereas warping-based approaches are more efficient but typically less accurate. To bridge this gap, in this paper we propose a novel fast framework for human pose estimation that achieves real-time inference with controllable accuracy degradation in the compressed video domain. Our approach takes advantage of the motion representation (the "motion vector") that is readily available in a compressed video. Pose joints in a frame are obtained by directly warping the pose joints of the previous frame using the motion vectors. We also propose modules that correct, when needed, the errors introduced by pose warping. Extensive experimental results demonstrate the effectiveness of our proposed framework in accelerating top-down human pose estimation in videos.
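The warping step can be summarized compactly. The following is a minimal, illustrative sketch rather than the authors' implementation: it assumes the motion vectors have already been extracted from the bitstream and arranged as a per-block field of forward displacements (previous frame to current frame); the function name, array layout, and block size are hypothetical.

```python
import numpy as np

def warp_joints_with_motion_vectors(prev_joints, motion_vectors, block_size=16):
    """Warp 2D pose joints from the previous frame to the current frame.

    prev_joints    : (J, 2) array of (x, y) joint locations in the previous frame.
    motion_vectors : (H_b, W_b, 2) array of per-block forward displacements
                     (dx, dy) from the previous frame to the current frame,
                     on a macroblock grid of size `block_size`.
    Returns a (J, 2) array of joint locations in the current frame.
    """
    h_b, w_b, _ = motion_vectors.shape
    warped = np.empty_like(prev_joints, dtype=np.float32)
    for j, (x, y) in enumerate(prev_joints):
        # Look up the motion vector of the block covering the joint.
        bx = int(np.clip(x // block_size, 0, w_b - 1))
        by = int(np.clip(y // block_size, 0, h_b - 1))
        dx, dy = motion_vectors[by, bx]
        # Shift the joint by the block's displacement.
        warped[j] = (x + dx, y + dy)
    return warped
```

In practice the extracted motion field may be sparse or noisy, which is why a correction module is needed when the warped joints drift from the true locations.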
Human pose estimation in videos is a cornerstone of many computer vision applications, such as smart video surveillance, human-computer interaction, and virtual reality. It aims to locate human body joints (e.g., head, elbow) in video sequences. Current real-time solutions to this problem can be categorized into per-frame methods [7], [10], [11], [13], [19], [22], [24], [30], [32]–[35], [40], [46]–[48], [50], [53], [56] and warping-based methods [5], [14], [38], [43].