Depth information is essential for many computer vision applications, such as autonomous navigation, virtual reality, and 3D reconstruction. Low-cost depth sensors are now widely deployed, but they typically offer only low resolution (LR) and are generally paired with high-resolution (HR) color sensors. Therefore, many studies have examined the super-resolution (SR) of depth images using HR color images as guidance.
As with other computer vision tasks, convolutional neural networks (CNNs) currently dominate depth SR. Motivated by classical color-guided depth SR methods, CNN-based methods exploit color features so that the resulting HR depth image can inherit the HR details of the color image. The main difference among existing CNN-based methods lies in how the color features are used for depth SR.
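To make the idea of classical color-guided depth SR concrete, the sketch below implements joint bilateral upsampling in NumPy: each HR depth value is a weighted average of nearby LR depth samples, weighted both by spatial distance and by color similarity in the HR guide image. This is a minimal illustration, not the paper's method; the function name and parameters (`sigma_s`, `sigma_r`, `radius`) are chosen here for exposition.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample an LR depth map guided by an HR color image.

    For each HR pixel, average LR depth neighbors weighted by
    spatial distance (sigma_s, on the LR grid) and by color
    similarity to the HR guide pixel (sigma_r).
    """
    H, W = color_hr.shape[:2]
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # Corresponding (fractional) coordinate on the LR grid.
            yl, xl = y / scale, x / scale
            yc, xc = int(round(yl)), int(round(xl))
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = yc + dy, xc + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Spatial weight on the LR grid.
                        ws = np.exp(-((ny - yl) ** 2 + (nx - xl) ** 2)
                                    / (2 * sigma_s ** 2))
                        # Range weight from HR color difference.
                        gy = min(int(ny * scale), H - 1)
                        gx = min(int(nx * scale), W - 1)
                        dc = color_hr[y, x] - color_hr[gy, gx]
                        wr = np.exp(-np.sum(dc ** 2) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[ny, nx]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[min(yc, h - 1),
                                                           min(xc, w - 1)]
    return out
```

Because the range weight suppresses LR neighbors whose guide colors differ from the current pixel, depth edges that coincide with color edges stay sharp instead of being blurred by interpolation.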
Hui et al. achieved promising SR results with a multi-scale guided network (MSG-Net) that uses the HR intensity image as guidance to complement depth features. Considering that edges play the most important role in depth SR, Wen et al. developed a handcrafted preprocessing method that refines the initially interpolated depth map using the corresponding color pixels to preserve details. Other approaches infer an HR depth edge map from the HR color image and the LR depth map, and use it as supplementary information to refine the depth boundaries. To further improve performance, Guo et al. employed rich hierarchical features from the HR intensity image through a residual U-Net architecture. Zuo et al. used the HR intensity image to progressively upscale the LR depth map with global and local residual learning. In addition, Zhao et al. increased the resolution of the color image and the depth map simultaneously using a generative adversarial network. Most of the aforementioned methods use the guide features simply by concatenating them with the depth features, so the SR network must learn on its own how to combine the two types of features properly, which remains a challenge. If the guide features are used insufficiently, the sharpness of the depth boundaries can be lost; conversely, if they are used excessively in flat areas, unnecessary high-frequency color details can be transferred from the guide image to the depth map.
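The texture-copying risk of concatenation-based fusion can be seen in a toy NumPy example: a 1x1 convolution over concatenated depth and guide channels reduces to a per-pixel weighted sum, so any nonzero weight on the guide channel injects the guide's texture into a flat depth region. The fixed weights below stand in for learned ones and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat depth-feature region and a highly textured color guide over it.
depth_feat = np.full((8, 8), 5.0)                # constant depth features
guide_feat = rng.normal(0.0, 1.0, size=(8, 8))   # high-frequency texture

# Concatenation-based fusion: stack along a channel axis, then mix
# with a 1x1 convolution (over 2 channels this is a weighted sum).
fused = np.stack([depth_feat, guide_feat], axis=0)  # shape (2, 8, 8)

def fuse_1x1(feats, w_depth, w_guide):
    # Per-pixel weighted sum of the two input channels.
    return w_depth * feats[0] + w_guide * feats[1]

ignored = fuse_1x1(fused, 1.0, 0.0)  # guide ignored: region stays flat
leaky   = fuse_1x1(fused, 1.0, 0.5)  # guide over-used: texture leaks

print(ignored.std())  # prints 0.0 -> no texture transferred
print(leaky.std())    # > 0 -> guide texture copied into the depth map
```

In a real network the mixing weights are spatially uniform once learned, so a single weight setting must serve both edge regions (where guide information helps) and flat regions (where it hurts), which is exactly the trade-off described above.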