
Open Images Challenge

Continuing the series of Open Images Challenges, the 2019 edition will be held at the International Conference on Computer Vision 2019. The challenge is based on the V5 release of the Open Images dataset. The images in the dataset are highly varied and often contain complex scenes with several objects (explore the dataset). This year the Challenge will again be hosted by our partners at Kaggle.

The 2nd Large-scale Video Object Segmentation (VOS) Challenge

As a continuing effort to push forward research on video object segmentation, we plan to host a second workshop with a challenge based on the YouTube-VOS dataset, this time targeting more diverse problem settings: the workshop will offer two challenge tracks. The first track targets semi-supervised video object segmentation, the same setting as in the first workshop. The second track introduces a new task, video instance segmentation, which aims to automatically segment all object instances of pre-defined object categories in videos.
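Segmentation tracks like these are typically scored by region overlap between predicted and ground-truth masks. As a minimal illustration (not the official evaluation code), mask intersection-over-union can be computed with NumPy:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, gt).sum()
    return float(inter / union)

# Example: two partially overlapping 4x4 squares on a 10x10 frame.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 px
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True  # 16 px
print(round(mask_iou(a, b), 3))  # intersection 4 px, union 28 px -> 0.143
```

Per-frame IoU scores like this are then aggregated over each video and object instance by the benchmark's evaluation server.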

Joint COCO and Mapillary Recognition Challenge

The goal of the joint COCO and Mapillary Workshop is to study object recognition in the context of scene understanding. While both the COCO and Mapillary challenges look at the general problem of visual recognition, the underlying datasets and the specific tasks in the challenges probe different aspects of the problem.

Vision Meets Drones: A Challenge

Drones, or general UAVs, equipped with cameras have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, fast delivery, and surveillance. Consequently, automatic understanding of visual data collected from these platforms is in high demand, bringing computer vision and drones ever closer together. We are excited to present VisDrone, a large-scale benchmark with carefully annotated ground truth for several important computer vision tasks, to make vision meet drones.

Conceptual Captions Challenge

Automatic caption generation is the task of producing a natural-language utterance (usually a sentence) that describes the visual content of an image. Practical applications of automatic caption generation include leveraging descriptions for image indexing or retrieval, and helping those with visual impairments by transforming visual signals into information that can be communicated via text-to-speech technology. The CVPR 2019 Conceptual Captions Challenge is based on two separate test sets:

T1) a blind test set that participants do not have direct access to. 

Habitat Challenge 2019

Habitat Challenge is an autonomous navigation challenge that aims to benchmark and accelerate progress in embodied AI. In its first iteration, Habitat Challenge 2019 is based on the PointGoal task defined in Anderson et al. We will have two tracks for the PointGoal task:

  1. RGB track: the agent's input modality is an RGB image.
  2. RGBD track: the agent's input modalities are an RGB image and depth.
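The difference between the two tracks is simply which sensor readings the agent receives at each step. A minimal sketch of that distinction, using a plain dictionary of NumPy arrays (illustrative only; the function name and shapes are assumptions, not the official Habitat API):

```python
import numpy as np

def make_observation(track: str, height: int = 256, width: int = 256) -> dict:
    """Build a per-step observation for the chosen challenge track.

    Hypothetical container for illustration -- not the official Habitat API.
    """
    obs = {"rgb": np.zeros((height, width, 3), dtype=np.uint8)}
    if track == "rgbd":
        # The RGBD track additionally provides a single-channel depth map.
        obs["depth"] = np.zeros((height, width, 1), dtype=np.float32)
    elif track != "rgb":
        raise ValueError(f"unknown track: {track}")
    return obs

print(sorted(make_observation("rgb").keys()))   # ['rgb']
print(sorted(make_observation("rgbd").keys()))  # ['depth', 'rgb']
```

An agent policy for the RGBD track would consume both arrays, while an RGB-track policy sees only the color image.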

AI City Challenge

Immense opportunity exists to make transportation systems smarter, based on sensor data from traffic, signaling systems, infrastructure, and transit. Unfortunately, progress has been limited for several reasons, among them poor data quality, missing data labels, and the lack of high-quality models that can convert the data into actionable insights. There is also a need for platforms that can handle analysis from the edge to the cloud, which will accelerate the development and deployment of these models.

Look Into Person (LIP) Challenge

We present a new large-scale dataset focusing on semantic understanding of people. The dataset is an order of magnitude larger and more challenging than previous similar attempts: it contains 50,000 images with elaborate pixel-wise annotations, 19 semantic human-part labels, and 2D human poses with 16 keypoints. The images, collected from real-world scenarios, show people in challenging poses and viewpoints, with heavy occlusion, varied appearance, and low resolution.
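The annotations described above pair a pixel-wise part-parsing map with a fixed set of pose keypoints per image. A sketch of one such record as a small dataclass (field names and layout are assumptions for illustration, not the official LIP file format):

```python
from dataclasses import dataclass
import numpy as np

NUM_PART_LABELS = 19  # semantic human-part classes (label 0 is background)
NUM_KEYPOINTS = 16    # 2D pose keypoints per person

@dataclass
class LipAnnotation:
    """Illustrative annotation record -- not the official LIP format."""
    image_id: str
    parsing: np.ndarray    # (H, W) integer map with values in [0, NUM_PART_LABELS]
    keypoints: np.ndarray  # (NUM_KEYPOINTS, 2) array of (x, y) positions

    def __post_init__(self):
        assert self.keypoints.shape == (NUM_KEYPOINTS, 2)
        assert self.parsing.max() <= NUM_PART_LABELS

# Example record with empty (all-background, origin-pinned) annotations.
ann = LipAnnotation(
    image_id="000001",
    parsing=np.zeros((64, 64), dtype=np.int64),
    keypoints=np.zeros((NUM_KEYPOINTS, 2), dtype=np.float32),
)
print(ann.keypoints.shape)  # (16, 2)
```

Keeping parsing and pose in one record mirrors how the dataset supports both human parsing and pose estimation tasks over the same images.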