Bayesian DeNet: Monocular Depth Prediction and Frame-Wise Fusion With Synchronized Uncertainty


By: Xin Yang; Yang Gao; Hongcheng Luo; Chunyuan Liao; Kwang-Ting Cheng

Using deep convolutional neural networks (CNNs) to predict depth from a single image has received considerable attention in recent years due to its impressive performance. However, existing methods process each image independently, without leveraging the multiview information available in video sequences in practical scenarios. Properly accounting for multiview information beyond individual frames could offer considerable benefits in depth prediction accuracy and robustness. In addition, a meaningful measure of prediction uncertainty is essential for decision making, but is not provided by existing methods. This paper presents a novel video-based depth prediction system based on a monocular camera, named Bayesian DeNet. Specifically, Bayesian DeNet consists of a 59-layer CNN that concurrently outputs a depth map and an uncertainty map for each video frame. Each pixel in an uncertainty map indicates the error variance of the corresponding depth estimate. Depth estimates and uncertainties of previous frames are propagated to the current frame based on the tracked camera pose, yielding multiple depth/uncertainty hypotheses for the current frame, which are then fused in a Bayesian inference framework for greater accuracy and robustness. Extensive experiments on three public datasets demonstrate that Bayesian DeNet outperforms the state-of-the-art methods for monocular depth prediction. A demo video and code are publicly available.
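The abstract does not give the fusion equations, but the per-pixel Bayesian fusion step can be sketched under a simple assumption: if each depth hypothesis (the current CNN prediction and the hypotheses warped from previous frames) is treated as an independent Gaussian estimate whose variance comes from the corresponding uncertainty map, the fused depth is the inverse-variance weighted average and the fused variance is the reciprocal of the summed precisions. The function and variable names below are illustrative only and are not taken from the paper.

import numpy as np

def fuse_depth_hypotheses(depth_maps, var_maps):
    # depth_maps, var_maps: lists of (H, W) arrays, one per hypothesis.
    # Assumes independent Gaussian estimates per pixel.
    depth_maps = np.stack(depth_maps)     # shape: (K, H, W)
    var_maps = np.stack(var_maps)         # shape: (K, H, W)
    weights = 1.0 / var_maps              # per-hypothesis precision
    fused_var = 1.0 / weights.sum(axis=0)              # posterior variance
    fused_depth = (weights * depth_maps).sum(axis=0) * fused_var  # posterior mean
    return fused_depth, fused_var

# Example: fuse the current frame's CNN prediction with one hypothesis
# warped from a previous frame (values are synthetic).
current_depth = np.full((2, 2), 2.0); current_var = np.full((2, 2), 0.04)
warped_depth  = np.full((2, 2), 2.2); warped_var  = np.full((2, 2), 0.16)
d, v = fuse_depth_hypotheses([current_depth, warped_depth],
                             [current_var, warped_var])
print(d)  # fused depth lies closer to the lower-variance estimate (~2.04)
print(v)  # fused variance is smaller than either input variance (0.032)

In this toy example the fused estimate is pulled toward the more confident (lower-variance) hypothesis, which is the intended effect of weighting each hypothesis by the uncertainty map the network predicts alongside the depth map.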
