Image, Video, and Multidimensional Signal Processing



Outdoor scenes are often affected by fog, haze, rain, and smog; the resulting poor visibility is caused by particles suspended in the atmosphere. This challenge is meant to consolidate research efforts on single-image recovery in adverse weather, especially on hazy and rainy days. The challenge consists of two tracks: Hazy Image Recovering (HIR) and Rainy Image Recovering (RIR). In both tracks, researchers are required to recover sharp images from given degraded (hazy or rainy) inputs.
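Single-image dehazing work of this kind commonly builds on the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance to recover, A the atmospheric light, and t the transmission. The challenge does not prescribe a method; purely as an illustrative baseline, here is a minimal NumPy sketch of the classical dark channel prior approach (He et al.), where the function name and all parameter values are assumptions for illustration:

```python
import numpy as np

def dehaze(img, patch=15, omega=0.95, t_min=0.1):
    """Illustrative dark-channel-prior dehazing.
    Recovers J from hazy input I via I = J*t + A*(1 - t).
    img: float array in [0, 1], shape (H, W, 3)."""
    h, w, _ = img.shape
    pad = patch // 2

    def dark_channel(x):
        # Per-pixel minimum over color channels, then over a local patch.
        d = np.pad(x.min(axis=2), pad, mode='edge')
        return np.min([d[i:i + h, j:j + w]
                       for i in range(patch) for j in range(patch)], axis=0)

    dark = dark_channel(img)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(0.001 * h * w))
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate: t(x) = 1 - omega * dark_channel(I / A).
    t = np.clip(1.0 - omega * dark_channel(img / A), t_min, 1.0)
    # Invert the scattering model: J = (I - A) / t + A.
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

On a synthetic hazy input (J attenuated toward a bright atmospheric light), the recovered image is darker and closer to the original scene radiance, which is the qualitative behavior the HIR track evaluates quantitatively.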

The aim of this challenge is to solicit original contributions addressing the restoration of mobile videos, helping to improve the quality of experience of video viewers and advance the state of the art in video restoration. Although quality degradation of videos occurs in various phases (e.g., capture, encoding, storage, and transmission), in this challenge we simplify the problem to one of post-processing.

To effectively prevent dengue fever outbreaks, cleaning up mosquito breeding sites is essential. This proposal provides labeled data for various types of containers and aims to build an object detection model for possible breeding sites. This way, inspectors can pinpoint containers holding stagnant water from digital camera images or live video, improving the effectiveness of inspection and breeding site elimination.

This Challenge solicits contributions that demonstrate efficient algorithms for point cloud compression. In addition to the proposed compression solutions, submissions of new rendering schemes, evaluation methodologies, and publicly accessible point cloud content are also encouraged.

Recent years have witnessed great progress on perception tasks such as image classification, object detection, and pixel-wise semantic/instance segmentation. It is the right time to go one step further and infer the relations between objects. Increasing effort is being devoted to relation prediction, as in the Visual Genome and Google Open Images challenges. There are two main differences between existing relation prediction work and the PIC challenge.

Face recognition in static images and video sequences captured under unconstrained recording conditions is one of the most widely studied topics in computer vision, due to its extensive applications in surveillance, law enforcement, biometrics, marketing, and so forth. Recently, methodologies that achieve good performance have been presented at top-tier computer vision conferences (e.g., ICCV, CVPR, ECCV), and great progress has been made in face recognition with deep learning-based methods.

Continuing the series of Open Images Challenges, the 2019 edition will be held at the International Conference on Computer Vision 2019. The challenge is based on the V5 release of the Open Images dataset. The images in the dataset are highly varied and often contain complex scenes with several objects. This year the Challenge will again be hosted by our partners at Kaggle.

As a continuing effort to push forward research on video object segmentation, we plan to host a second workshop with a challenge based on the YouTube-VOS dataset, targeting more diverse problem settings; specifically, we plan to provide two challenge tracks. The first track targets semi-supervised video object segmentation, the same setting as in the first workshop. The second track will be a new task, named video instance segmentation, which aims to automatically segment all object instances of pre-defined object categories from videos.

The goal of the joint COCO and Mapillary Workshop is to study object recognition in the context of scene understanding. While both the COCO and Mapillary challenges look at the general problem of visual recognition, the underlying datasets and the specific tasks in the challenges probe different aspects of the problem.

Drones, or general UAVs, equipped with cameras have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, fast delivery, and surveillance. Consequently, automatic understanding of the visual data collected from these platforms is in high demand, bringing computer vision and drones ever closer together. We are excited to present VisDrone, a large-scale benchmark with carefully annotated ground truth for various important computer vision tasks, to make vision meet drones.


