Data Challenges



Past Challenges


Bio Imaging and Signal Processing

2019

The PALM challenge focuses on the investigation and development of algorithms for the diagnosis of Pathologic Myopia (PM) and the segmentation of lesions in fundus photos from PM patients. Myopia is currently the ocular disease with the highest morbidity. About 2 billion people worldwide have myopia, 35% of whom have high myopia. High myopia leads to elongation of the axial length and thinning of retinal structures. As the disease progresses into PM, macular retinoschisis, retinal atrophy, and even retinal detachment may occur, causing irreversible impairment of visual acuity.



Applied Signal Processing Systems

2024

Supported by the SPS Challenge Program.

Accurate analysis of liver vasculature in three dimensions (3D) is essential for a variety of medical procedures including computer-aided diagnosis, treatment planning, or pre-operative planning of hepatic diseases.

Supported by the SPS Challenge Program.

The George B. Moody PhysioNet Challenges are annual competitions that invite participants to develop automated approaches for addressing important physiological and clinical problems. The 2024 Challenge invites teams to develop algorithms for digitizing and classifying electrocardiograms (ECGs) captured from images or paper printouts. 



Image, Video, and Multidimensional Signal Processing

2024

Omnidirectional visual content, commonly referred to as 360-degree images and videos, has garnered significant interest in both academia and industry, establishing itself as the primary media modality for VR/XR applications. 

Video compression standards rely heavily on eliminating spatial and temporal redundancy within and across video frames. Intra-frame encoding targets redundancy within blocks of a single video frame, whereas inter-frame coding focuses on removing redundancy between the current frame and its reference frames.
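As a rough illustration of the inter-frame idea (a minimal sketch only, not tied to any particular coding standard; the toy frames and the hand-picked motion vector below are assumptions for illustration), an encoder codes just the residual between a block of the current frame and a motion-compensated block of a reference frame:

    import numpy as np

    def block_residual(cur, ref, y, x, dy, dx, bs=16):
        """Residual between a current-frame block and its motion-compensated
        reference block; only this residual (plus the motion vector) is coded."""
        cur_blk = cur[y:y + bs, x:x + bs].astype(np.int16)
        ref_blk = ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(np.int16)
        return cur_blk - ref_blk

    # Toy 8-bit frames; in practice the motion vector (dy, dx) comes from a search.
    cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    ref = np.roll(cur, (1, 2), axis=(0, 1))       # reference frame shifted by (1, 2)
    res = block_residual(cur, ref, 16, 16, 1, 2)  # near-zero residual, cheap to code
    print(int(np.abs(res).sum()))

When the prediction is good, the residual is close to zero, which is what makes inter-frame coding so much cheaper than coding each frame independently.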

Introducing the ICASSP 2024 SPGC competition, aimed at reconstructing skin spectral reflectance in the visible (VIS) and near-infrared (NIR) spectral ranges from RGB images captured by everyday cameras, offering a transformative approach for cosmetic and beauty applications.
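One common baseline for this kind of spectral reconstruction (a hedged sketch only; the ridge regularizer, the 31-band wavelength grid, and the synthetic training pairs below are assumptions, not the competition's prescribed method) is a linear regression from RGB values to sampled reflectance spectra:

    import numpy as np

    def fit_rgb_to_spectrum(rgb_train, spectra_train, lam=1e-3):
        """Ridge-regression map W such that spectrum ~ [R, G, B, 1] @ W."""
        X = np.hstack([rgb_train, np.ones((rgb_train.shape[0], 1))])  # (N, 4)
        A = X.T @ X + lam * np.eye(X.shape[1])        # regularized normal equations
        return np.linalg.solve(A, X.T @ spectra_train)  # (4, n_bands)

    def predict_spectrum(W, rgb):
        return np.append(rgb, 1.0) @ W                  # (n_bands,)

    # Toy data: 200 training pixels, reflectance sampled at 31 bands (e.g., 400-700 nm).
    rng = np.random.default_rng(0)
    rgb_train = rng.random((200, 3))
    spectra_train = rng.random((200, 31))
    W = fit_rgb_to_spectrum(rgb_train, spectra_train)
    print(predict_spectrum(W, np.array([0.4, 0.3, 0.2])).shape)  # (31,)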

Supported by the SPS Challenge Program.

Accurate analysis of liver vasculature in three dimensions (3D) is essential for a variety of medical procedures including computer-aided diagnosis, treatment planning, or pre-operative planning of hepatic diseases.

Supported by the SPS Challenge Program.

The George B. Moody PhysioNet Challenges are annual competitions that invite participants to develop automated approaches for addressing important physiological and clinical problems. The 2024 Challenge invites teams to develop algorithms for digitizing and classifying electrocardiograms (ECGs) captured from images or paper printouts. 

Speech-enabled systems often experience performance degradation in real-world scenarios, primarily due to adverse acoustic conditions and interactions among multiple speakers. Enhancing the front-end speech processing technology is vital for improving the performance of the back-end systems. 

View synthesis is a task of generating novel views of a scene/object from a given set of input views. It is a challenging and important problem in computer vision and graphics, with significant applications in virtual and augmented reality, 3D reconstruction, video editing, and more.

2023

Point clouds (PC) are widely used for storing and transmitting 3D visual data in applications like virtual reality, autonomous driving, etc. To deal with the large size and complexity of point clouds, efficient compression methods have been developed, including MPEG standards and recent deep-learning-based approaches. To optimize and benchmark processing algorithms and codecs, point cloud quality metrics are crucial.
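For intuition, one of the simplest such metrics is the point-to-point (D1) error: every point of one cloud is compared with its nearest neighbor in the other. A minimal sketch (brute-force nearest neighbors on toy data; actual evaluations typically use the MPEG metric software and also report PSNR-style variants):

    import numpy as np

    def point_to_point_mse(ref, deg):
        """Symmetric point-to-point (D1) MSE between two point clouds (N x 3, M x 3)."""
        def one_way(a, b):
            # For each point in a, squared distance to its nearest neighbor in b.
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return d2.min(axis=1).mean()
        return max(one_way(ref, deg), one_way(deg, ref))   # report the worse direction

    rng = np.random.default_rng(1)
    ref = rng.random((500, 3))
    deg = ref + rng.normal(scale=0.01, size=ref.shape)     # slightly distorted copy
    print(point_to_point_mse(ref, deg))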

This challenge will perform the first comprehensive benchmark of the impact of a wide range of distortions on the performance of current object detection methods. The proposed database contains, in addition to the conventional real distortions, other synthesized photo-realistic distortions corresponding to real and very frequent scenarios often neglected in other databases despite their importance. The synthetic distortions are generated according to several types and severity levels with respect to the scene context.

The proliferation of Unmanned Aerial Vehicles (UAVs) such as drones has caused serious security and privacy concerns in the recent past. Detecting drones is extremely challenging when they closely resemble birds or other flying entities and must be detected under low-visibility conditions.

In the ICIP 2023 Grand Challenge entitled "Automatic Detection of Mosquito Breeding Grounds", we consider the development of a video-analysis system for the automatic detection of objects commonly associated with mosquito foci: discarded tires, water tanks, buckets, puddles, pools, and bottles.

2022

Associated SPS Event: IEEE ICASSP 2022 Grand Challenge

The CORSMAL challenge focuses on estimating the capacity, dimensions, and mass of containers; the type, mass, and amount of filling (as a percentage of the container's capacity); and the overall mass of the container plus filling. The specific containers and fillings are unknown to the robot: the only prior is a set of object categories (drinking glasses, cups, food boxes) and a set of filling types (water, pasta, rice).

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

High Dynamic Range (HDR) imaging provides the ability to capture, manipulate, and display real-world lighting. This is a significant upgrade over Standard Dynamic Range (SDR), which handles only 256 (8-bit) luminance code values concurrently. While capture technologies have advanced significantly over the last few years, currently available HDR capturing sensors (e.g., in smartphones) only improve the dynamic range by a few stops over conventional SDR.
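As a back-of-the-envelope comparison (illustrative display figures only, assumed for this sketch rather than taken from the challenge), dynamic range is often quoted in stops, i.e., the base-2 logarithm of the ratio between the brightest and darkest representable luminance:

    import math

    def stops(l_max, l_min):
        """Dynamic range in photographic stops: log2 of the max/min luminance ratio."""
        return math.log2(l_max / l_min)

    # Illustrative figures: a 100 cd/m^2 SDR display with a 0.1 cd/m^2 black level
    # versus an HDR display reaching 1000 cd/m^2 with a 0.005 cd/m^2 black level.
    print(round(stops(100, 0.1), 1))     # ~10 stops
    print(round(stops(1000, 0.005), 1))  # ~17.6 stops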

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

Intestinal parasitic infections remain among the leading causes of morbidity worldwide, especially in tropical and sub-tropical areas with more temperate climates. According to WHO, approximately 1.5 billion people, or 24% of the world’s population, are infected with soil-transmitted helminth infections (STH), and 836 million children worldwide required preventive chemotherapy for STH in 2020.

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

The perceptual quality of images and videos in the context of video surveillance has a very significant impact on high-level tasks such as object detection, identification of abnormal events, and visual tracking. Despite the development of advanced video sensors with higher resolution, the quality of the acquired video is often affected by distortions due to the environment, encoding, and storage technologies, which can only be mitigated by employing intelligent post-processing solutions.

Associated SPS Event: IEEE ICASSP 2022 Grand Challenge

The MISP Challenge 2021 has been accepted as a Signal Processing Grand Challenge (SPGC) of ICASSP 2022! Please refer to the ICASSP 2022 SPGC page for more details.

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

KP Labs, together with ESA (European Space Agency) and partner QZ Solutions, has created an extraordinary challenge aimed at revolutionizing the future of farming with the help of in-orbit processing. Maintaining farm sustainability by improving agricultural management practices with recent advances in Earth observation and artificial intelligence has become an important issue. It can not only help farmers face the challenge of producing food at an affordable price, but can also be a crucial step toward planet-friendly agriculture.

2021

Associated SPS Event: IEEE MMSP 2021 Grand Challenge

This challenge is meant to consolidate and strengthen research efforts on image inpainting using structural guidance. We will prepare two tracks: image restoration (IR) and image editing (IE). In the IR track, we mask out random areas in an image and provide the edge maps within those areas to help restore the image.

2019

Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. In this workshop, we will introduce two new benchmarks for the object detection task: Objects365 and CrowdHuman, both of which are designed and collected in the wild. The Objects365 benchmark targets large-scale detection with 365 object categories.

Face recognition in static images and video sequences captured in unconstrained recording conditions is one of the most widely studied topics in computer vision due to its extensive applications in surveillance, law enforcement, biometrics, marketing, and so forth. Recently, methodologies that achieve good performance have been presented in top-tier computer vision conferences (e.g., ICCV, CVPR, ECCV), and great progress has been achieved in face recognition with deep learning-based methods.

We present a new large-scale dataset focusing on semantic understanding of persons. The dataset is an order of magnitude larger and more challenging than similar previous attempts: it contains 50,000 images with elaborate pixel-wise annotations using 19 semantic human part labels, as well as 2D human poses with 16 key points. The images, collected from real-world scenarios, contain people in challenging poses and views, with heavy occlusion, varied appearance, and low resolution.

Recent years have witnessed great progress on perception tasks such as image classification, object detection, and pixel-wise semantic/instance segmentation. It is now time to go one step further and infer the relations between objects. Increasingly more efforts are devoted to relation prediction, such as the Visual Genome and Google Open Images challenges. There are two main differences between existing relation prediction work and the PIC challenge.

Immense opportunity exists to make transportation systems smarter, based on sensor data from traffic, signaling systems, infrastructure, and transit. Unfortunately, progress has been limited for several reasons, among them poor data quality, missing data labels, and the lack of high-quality models that can convert the data into actionable insights. There is also a need for platforms that can handle analysis from the edge to the cloud, which will accelerate the development and deployment of these models.

This Challenge solicits contributions that demonstrate efficient algorithms for point cloud compression. In addition to compression solutions, submissions of new rendering schemes, evaluation methodologies, and publicly accessible point cloud content are also encouraged.

The main goal of the CARLA Autonomous Driving Challenge is to achieve driving proficiency in realistic traffic situations.

To effectively prevent dengue fever outbreaks, cleaning up mosquito breeding sites is essential. This proposal provides labeled data for various types of containers and aims to build an object detection model for possible breeding sites. Inspectors can then pinpoint containers holding stagnant water from digital camera images or live video, improving the effectiveness of inspection and breeding-site elimination.

This challenge is the 4th annual installment of the International Challenge on Activity Recognition, previously called the ActivityNet Large-Scale Activity Recognition Challenge, which was first hosted during CVPR 2016. It focuses on the recognition of daily-life, high-level, goal-oriented activities from user-generated videos such as those found in internet video portals.

Habitat Challenge is an autonomous navigation challenge that aims to benchmark and accelerate progress in embodied AI. In its first iteration, Habitat Challenge 2019 is based on the PointGoal task defined in Anderson et al. We will have two tracks for the PointGoal task:

  1. RGB track: the agent's input modality is an RGB image.
  2. RGBD track: the agent's input modalities are an RGB image and a depth image.

The aim of this challenge is to solicit original contributions addressing the restoration of mobile videos that will help improve the quality of experience of video viewers and advance the state of the art in video restoration. Although quality degradation of videos occurs in various phases (e.g., capture, encoding, storage, and transmission), in this challenge we simplify the problem to one of post-processing.

The SUMO Challenge targets the development of algorithms for comprehensive understanding of 3D indoor scenes from 360° RGB-D panoramas. The target 3D models of indoor scenes include all visible layout elements and objects complete with pose, semantic information, and texture. Algorithms submitted are evaluated at 3 levels of complexity corresponding to 3 tracks of the challenge: oriented 3D bounding boxes, oriented 3D voxel grids, and oriented 3D meshes. SUMO Challenge results will be presented at the 2019 SUMO Challenge Workshop, at CVPR.

We will organize the first Learning from Imperfect Data (LID) challenge on object semantic segmentation and scene parsing, which includes two competition tracks:

Track 1: Object semantic segmentation with image-level supervision

Track 2: Scene parsing with point-based supervision

Outdoor scenes are often affected by fog, haze, rain, and smog; poor visibility in the atmosphere is caused by suspended particles. This challenge is meant to consolidate research efforts on single-image recovery in adverse weather, especially on hazy and rainy days. The challenge consists of two tracks: Hazy Image Recovering (HIR) and Rainy Image Recovering (RIR). In both tracks, researchers are required to recover sharp images from given degraded (hazy or rainy) inputs.
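Much single-image dehazing work (a hedged aside, not necessarily the degradation model mandated by this challenge) builds on the atmospheric scattering model, where the observed hazy image I is a blend of the scene radiance J and the atmospheric light A weighted by the transmission t: I(x) = J(x) * t(x) + A * (1 - t(x)). Given estimates of t and A, J can be recovered by inverting the model:

    import numpy as np

    def dehaze(I, t, A, t_min=0.1):
        """Invert I = J*t + A*(1 - t) to recover the scene radiance J,
        clamping t to avoid amplifying noise where the haze is dense."""
        t = np.clip(t, t_min, 1.0)[..., None]      # broadcast over color channels
        return (I - A * (1.0 - t)) / t

    # Toy check: synthesize a hazy image from known J, t, A, then invert it.
    rng = np.random.default_rng(2)
    J = rng.random((32, 32, 3))
    t = np.full((32, 32), 0.6)
    A = np.array([0.9, 0.9, 0.9])
    I = J * t[..., None] + A * (1.0 - t[..., None])
    print(np.allclose(dehaze(I, t, A), J))         # True

In practice, the hard part is estimating t and A from the single hazy input, which is where most of the research effort goes.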

Computer vision technologies have made impressive progress in recent years, but often at the expense of increasingly complex models needing more and more computational and storage resources. This workshop aims to improve the energy efficiency of computer vision solutions for running on systems with stringent resources, for example, mobile phones, drones, or renewable energy systems. Efficient computer vision can enable many new applications (e.g., wildlife observation) powered by ambient renewable energy (e.g., solar, vibration, and wind).

Automatic caption generation is the task of producing a natural-language utterance (usually a sentence) that describes the visual content of an image. Practical applications of automatic caption generation include leveraging descriptions for image indexing or retrieval, and helping those with visual impairments by transforming visual signals into information that can be communicated via text-to-speech technology. The CVPR 2019 Conceptual Captions Challenge is based on two separate test sets:

T1) a blind test set that participants do not have direct access to. 

This workshop will bring together the participants of the first Robotic Vision Challenge, a new competition targeting both the computer vision and robotics communities. The new challenge focuses on probabilistic object detection. The novelty is the probabilistic aspect of detection: a new metric evaluates both the spatial and semantic uncertainty of the object detector and segmentation system. Providing reliable uncertainty information is essential for robotics applications, where actions triggered by erroneous but high-confidence perception can lead to catastrophic results.

Drones, or general UAVs, equipped with cameras have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, fast delivery, and surveillance. Consequently, automatic understanding of visual data collected from these platforms is in high demand, bringing computer vision and drones ever closer together. We are excited to present VisDrone, a large-scale benchmark with carefully annotated ground truth for various important computer vision tasks, to make vision meet drones.

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content, the filling in of missing information, or the needed transformation and/or manipulation to achieve a desired target (with respect to perceptual quality, contents, or performance of apps working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

The goal of the joint COCO and Mapillary Workshop is to study object recognition in the context of scene understanding. While both the COCO and Mapillary challenges look at the general problem of visual recognition, the underlying datasets and the specific tasks in the challenges probe different aspects of the problem.

In-depth analysis of the state-of-the-art in video object segmentation.

 

As a continued effort to push forward research on video object segmentation, we plan to host a second workshop with a challenge based on the YouTube-VOS dataset, targeting more diversified problem settings; i.e., we plan to provide two challenge tracks in this workshop. The first track targets semi-supervised video object segmentation, the same setting as in the first workshop. The second track is a new task named video instance segmentation, which targets automatically segmenting all object instances of pre-defined object categories from videos.

The domain of image compression has traditionally used approaches discussed in forums such as ICASSP, ICIP and other very specialized venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and challenge will be the first computer-vision event to explicitly focus on these fields. Many techniques discussed at computer-vision meetings have relevance for lossy compression.

 

Continuing the series of Open Images Challenges, the 2019 edition will be held at the International Conference on Computer Vision 2019. The challenge is based on the V5 release of the Open Images dataset. The images of the dataset are very varied and often contain complex scenes with several objects (explore the dataset). This year the Challenge will be again hosted by our partners at Kaggle.

2017

Tremendous progress has been achieved in the way consumers and professionals capture, store, deliver, display and process visual content. Emerging cameras and displays allow for the capture and visualization of new and rich forms of visual data. A new activity of the JPEG Standardization Committee, called JPEG Pleno, intends to provide a standard framework to facilitate capture, representation and exchange of plenoptic content in omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities.



Information Forensics and Security

2022

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

The perceptual quality of images and videos in the context of video surveillance has a very significant impact on high-level tasks such as object detection, identification of abnormal events, and visual tracking. Despite the development of advanced video sensors with higher resolution, the quality of the acquired video is often affected by distortions due to the environment, encoding, and storage technologies, which can only be mitigated by employing intelligent post-processing solutions.

In collaboration with WIFS, TU Delft CYS is organizing a competition in the domain of signal processing, combining biometrics and secure computation. Specifically, competitors are tasked to create a private odor-based access control system that matches encrypted human samples with permitted encrypted samples on an external database. 

2020

Steganography is the art and science of hiding data within media, while its counterpart, steganalysis, refers to all methods that aim at detecting media used to transmit hidden data. The two form a cat-and-mouse game: steganography aims to modify the media while remaining as stealthy as possible, and steganalysis aims to fight back by improving detection accuracy. Most work in these fields is based on specific datasets, and applying current state-of-the-art steganalysis techniques in the “real world” is hardly possible.
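As a toy illustration of the embedding side (a classic least-significant-bit scheme on synthetic pixels; real challenge data relies on far more sophisticated embedding schemes and detectors), the payload bits simply replace the LSB of each cover sample:

    import numpy as np

    def lsb_embed(cover, bits):
        """Hide a bit string in the least-significant bits of 8-bit cover samples."""
        stego = cover.copy()
        flat = stego.ravel()
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
        return stego

    def lsb_extract(stego, n_bits):
        return stego.ravel()[:n_bits] & 1

    cover = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    msg = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
    stego = lsb_embed(cover, msg)
    print(lsb_extract(stego, len(msg)))   # recovers the hidden bits

Naive LSB replacement like this is easily detected by modern steganalysis, which is exactly the cat-and-mouse dynamic such challenges are built around.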

2010

The 2nd BOWS Contest (Break Our Watermarking System) was organised within the activity of the Watermarking Virtual Laboratory (Wavila) of the European Network of Excellence ECRYPT (http://www.ecrypt.eu.org/) between the 17th of July 2007 and the 17th of April 2008.
