Data Challenges



Past Challenges


Bio Imaging and Signal Processing

2023

Epilepsy is one of the most common neurological disorders, affecting almost 1% of the population worldwide. Seizures are usually categorized based on the seizure onset zone (the area of the brain where the seizure initiates), the progression of the seizure, and the awareness status of the patient who experiences it. Focal onset seizures are the most common type of seizure in adults with epilepsy.

Various neuroimaging techniques can be used to investigate how the brain processes sound. Electroencephalography (EEG) is popular because it is relatively easy to conduct and has a high temporal resolution. An increasingly popular method in these fields is to relate a person's EEG to a feature of the natural speech signal they were listening to. This is typically done using linear regression or relatively simple neural networks, either to predict the EEG signal from the stimulus or to decode the stimulus from the EEG.
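The linear-regression approach described above can be sketched as a simple forward model (often called a temporal response function): time-lagged copies of a speech feature, such as the envelope, are regressed onto an EEG channel with ridge regularization. The function names and the synthetic data below are illustrative, not from any challenge baseline.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix whose columns are time-lagged copies of the stimulus feature."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=32, alpha=1.0):
    """Ridge-regularized forward model mapping stimulus -> one EEG channel."""
    X = lagged_design(stimulus, n_lags)
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
    return w, X

# Synthetic demo: the "EEG" is a delayed, noisy copy of the speech envelope
rng = np.random.default_rng(0)
envelope = rng.standard_normal(2000)                      # stand-in for a speech envelope
eeg = 0.8 * np.roll(envelope, 5) + 0.2 * rng.standard_normal(2000)
w, X = fit_trf(envelope, eeg)
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]                          # prediction accuracy (Pearson r)
```

The learned weights peak at the true 5-sample delay, and the correlation between predicted and measured EEG is the usual evaluation score in this line of work.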

2022

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

Intestinal parasitic infections remain among the leading causes of morbidity worldwide, especially in tropical and sub-tropical areas. According to the WHO, approximately 1.5 billion people, or 24% of the world's population, are infected with soil-transmitted helminths (STH), and 836 million children worldwide required preventive chemotherapy for STH in 2020.

2021

Associated SPS Event: IEEE ICASSP 2021 Grand Challenge

Novel Coronavirus (COVID-19) has overwhelmed more than 200 countries around the world, affecting millions and claiming more than 1.5 million human lives since its first emergence in late 2019. This highly contagious disease spreads easily and, if not controlled in a timely fashion, can rapidly incapacitate healthcare systems.

2020

Translational utility is the ability of certain biomedical imaging features to capture useful subject-level characteristics in clinical settings, yielding sensible descriptions and/or predictions for an individualized treatment trajectory. An important step in achieving translational utility is to demonstrate the association between imaging features and individual characteristics, such as sex, age, and other relevant assessments, on a large out-of-sample unaffected population (no diagnosed illnesses). This initial step then provides a strong normative basis for comparison with patient populations in clinical settings.

2019

In digital pathology, it is often useful to align spatially close but differently stained tissue sections in order to combine their information. The images are large, their appearance and local structure differ, and they are related through a nonlinear transformation. The proposed challenge focuses on comparing the accuracy and approximate speed of automatic nonlinear registration methods for this task. Registration accuracy will be evaluated using manually annotated landmarks.
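Landmark-based evaluation of the kind described here typically reduces to a target registration error: the distance between landmarks mapped through the estimated transform and their manually annotated counterparts. A minimal sketch, with an illustrative function name and toy coordinates:

```python
import numpy as np

def registration_error(warped_landmarks, target_landmarks):
    """Mean and max Euclidean distance between registered landmarks and
    ground-truth annotations (often called target registration error, TRE)."""
    d = np.linalg.norm(
        np.asarray(warped_landmarks) - np.asarray(target_landmarks), axis=1
    )
    return d.mean(), d.max()

# Toy example: landmark positions after registration vs. manual annotations (pixels)
warped = [(10.0, 12.0), (40.5, 41.0), (80.0, 79.0)]
manual = [(10.0, 10.0), (40.0, 41.0), (81.0, 79.0)]
mean_err, max_err = registration_error(warped, manual)
```

Challenges of this type usually report such distances normalized by image size or resolution so that results are comparable across slides.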

Digital pathology has been gradually introduced into clinical practice. Although digital pathology scanners can produce very high-resolution whole-slide images (WSI) (up to 160 nm per pixel), the manual analysis of WSIs is still a time-consuming task for pathologists. Automatic analysis algorithms offer a way to reduce this burden. Our proposed challenge focuses on the automatic detection and classification of lung cancer in whole-slide histopathology images. This subject is highly clinically relevant because lung cancer is the leading cause of cancer-related death in the world.

CHAOS has two separate but related aims:

  1. Segmentation of the liver from computed tomography (CT) data sets acquired at the portal phase after contrast agent injection, for the pre-evaluation of living donor liver transplantation (15 training + 15 test sets).
  2. Segmentation of four abdominal organs (liver, spleen, and right and left kidneys) from magnetic resonance imaging (MRI) data sets acquired with two different sequences (T1-DUAL and T2-SPIR) (15 training + 15 test sets).
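Segmentation challenges of this kind are commonly scored with overlap measures such as the Dice similarity coefficient (often alongside surface-distance metrics). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

# Two overlapping 4x4 squares on an 8x8 grid: 16 px each, 9 px overlap
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), bool); gt[3:7, 3:7] = True
score = dice(pred, gt)   # 2*9 / (16+16) = 0.5625
```

Dice ranges from 0 (no overlap) to 1 (perfect agreement), which makes it convenient for ranking submissions across organs and modalities.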

The goal of the challenge is to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). There are two tasks and therefore two leaderboards for evaluating the performance of the algorithms. Participants can choose to join either or both tasks according to their interests.

BraTS has always focused on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2019 utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas.

Skin cancer is the most common cancer globally, with melanoma being the most deadly form. Dermoscopy is a skin imaging modality that has demonstrated improvement for diagnosis of skin cancer compared to unaided visual inspection. However, clinicians should receive adequate training for those improvements to be realized.

Endoscopic Artefact Detection (EAD) is a core challenge in facilitating diagnosis and treatment of diseases in hollow organs. Precise detection of specific artefacts like pixel saturations, motion blur, specular reflections, bubbles and debris is essential for high-quality frame restoration and is crucial for realising reliable computer-assisted tools for improved patient care.

This challenge aims at creating an open and fair competition for various research groups to test and validate their methods, particularly for the multi-sequence ventricle and myocardium segmentation.

Diffusion MRI has emerged as a key modality for imaging brain tissue microstructural features, yet validation is necessary for accurate and useful biomarkers. Towards this end, we present the two-year ISBI 2019/2020 diffusion Mri whitE Matter rEcoNstrucTiOn (MEMENTO) challenge. The first year is dedicated to designing the challenge, building the appropriate dataset(s), and making it available to the community. The challenge and participant submissions will take place in the second year, with the aim to evaluate and advance the state of the microstructural modeling field.

The aim of this challenge is to learn effective machine learning models that can estimate a set of clinically significant LV indices (regional wall thicknesses, cavity dimensions, areas of cavity and myocardium, cardiac phase) directly from MR images. No intermediate segmentation is required in the whole procedure.

The PALM challenge focuses on the investigation and development of algorithms for the diagnosis of Pathologic Myopia (PM) and the segmentation of lesions in fundus photos from PM patients. Myopia is currently the ocular disease with the highest morbidity. About 2 billion people in the world have myopia, about 35% of whom have high myopia. High myopia leads to elongation of the axial length and thinning of retinal structures. With progression of the disease into PM, macular retinoschisis, retinal atrophy, and even retinal detachment may occur, causing irreversible impairment of visual acuity.

The aim is to provide a formal framework for evaluating the current state of the art, gather researchers in the field and provide high quality data with protocols for validating endoscopic vision algorithms.

Computer-assisted tools can provide cost-effective and easily deployable solutions for cancer diagnostics. The aim of this challenge is to build a classifier for the identification of leukemic versus normal immature cells for white blood cell cancer, namely B-ALL, diagnostics. A dataset of cells with class labels, marked by experts based on domain knowledge, will be provided at the subject level to train the classifier. This problem is interesting because the two cell types appear similar under the microscope and subject-level variability plays a key role.

In 2012, the Cell Tracking Challenge (CTC) was launched to objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods using both real (2D and 3D) time-lapse microscopy videos of cells and nuclei and computer-generated (2D and 3D) video sequences simulating nuclei moving in realistic environments. To address numerous requests for benchmarking cell segmentation methods alone (without tracking), we are now launching a new time-lapse cell segmentation benchmark on the same datasets (plus one new dataset).



Image, Video, and Multidimensional Signal Processing

2023

Point clouds (PC) are widely used for storing and transmitting 3D visual data in applications like virtual reality, autonomous driving, etc. To deal with the large size and complexity of point clouds, efficient compression methods have been developed, including MPEG standards and recent deep-learning-based approaches. To optimize and benchmark processing algorithms and codecs, point cloud quality metrics are crucial.
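A basic objective measure that point cloud quality metrics build on is the symmetric point-to-point (D1) error: each point is compared to its nearest neighbor in the other cloud, and the worse of the two directions is reported. A brute-force sketch on toy clouds (production tooling uses k-d trees and typically converts the error to a PSNR):

```python
import numpy as np

def p2p_error(a, b):
    """Symmetric point-to-point error between two point clouds (N x 3 arrays):
    mean squared nearest-neighbor distance, taken in both directions."""
    d_ab = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2), axis=1)
    d_ba = np.min(np.linalg.norm(b[:, None, :] - a[None, :, :], axis=2), axis=1)
    return max((d_ab ** 2).mean(), (d_ba ** 2).mean())

# Toy example: a degraded cloud that is the reference shifted by 0.1 along x
ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
deg = ref + np.array([0.1, 0., 0.])
err = p2p_error(ref, deg)
```

Taking the maximum over both directions penalizes both missing points and spurious ones, which is why the symmetric form is preferred for codec benchmarking.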

This challenge will perform the first comprehensive benchmark of the impact of a wide range of distortions on the performance of current object detection methods. The proposed database contains, in addition to the conventional real distortions, other synthesized photo-realistic distortions corresponding to real and very frequent scenarios often neglected in other databases despite their importance. The synthetic distortions are generated according to several types and severity levels with respect to the scene context.

The proliferation of Unmanned Aerial Vehicles (UAVs) such as drones has caused serious security and privacy concerns in the recent past. Detecting drones is extremely challenging when they closely resemble birds or other flying objects and must be detected under low-visibility conditions.

In the ICIP 2023 Grand Challenge entitled "Automatic Detection of Mosquito Breeding Grounds", we consider the development of a video-analysis system for the automatic detection of objects commonly associated with mosquito foci: discarded tires, water tanks, buckets, puddles, pools, and bottles.

2022

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

High Dynamic Range (HDR) imaging provides the ability to capture, manipulate, and display real-world lighting. This is a significant upgrade from Standard Dynamic Range (SDR), which only handles 256 discrete luminance levels. While capture technologies have advanced significantly over the last few years, currently available HDR capture sensors (e.g., in smartphones) only improve the dynamic range by a few stops over conventional SDR.
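To illustrate the dynamic-range gap, a global tone-mapping operator such as Reinhard's compresses arbitrary HDR luminance into the fixed range an 8-bit SDR display can show. The sample luminance values below are illustrative:

```python
import numpy as np

def reinhard_tonemap(luminance):
    """Reinhard global operator L/(1+L): maps HDR luminance in [0, inf)
    into [0, 1), suitable for quantization to 8-bit SDR."""
    L = np.asarray(luminance, float)
    return L / (1.0 + L)

# Five orders of magnitude of scene luminance squeezed into 0..255
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
sdr = np.round(reinhard_tonemap(hdr) * 255).astype(int)
```

The mapping is monotone, so relative brightness ordering survives, but most of the highlight detail is crushed into the top few code values, which is exactly the information true HDR pipelines preserve.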

Associated SPS Event: IEEE ICASSP 2022 Grand Challenge

The MISP Challenge 2021 has been accepted as a Signal Processing Grand Challenge (SPGC) of ICASSP 2022! Please refer to the ICASSP 2022 SPGC page for more details.

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

The perceptual quality of images and videos in the context of video surveillance has a very significant impact on high-level tasks such as object detection, identification of abnormal events, and visual tracking, to name a few. Despite the development of advanced video sensors with higher resolution, the quality of the acquired video is often affected by distortions due to the environment, encoding, and storage technologies, which can only be mitigated by employing intelligent post-processing solutions.

Associated SPS Event: IEEE ICIP 2022 Grand Challenge

KP Labs, together with ESA (the European Space Agency) and partner QZ Solutions, has created an extraordinary challenge aimed at advancing the future of farming with the help of in-orbit processing. Maintaining farm sustainability by improving agricultural management practices with recent advances in Earth observation and artificial intelligence has become an important issue. It can not only help farmers face the challenge of producing food at an affordable price, but can also be a crucial step toward planet-friendly agriculture.

Associated SPS Event: IEEE ICASSP 2022 Grand Challenge

The CORSMAL challenge focuses on the estimation of the capacity, dimensions, and mass of containers; the type, mass, and filling level (percentage of the container filled) of the content; and the overall mass of the container plus filling. The specific containers and fillings are unknown to the robot: the only prior is a set of object categories (drinking glasses, cups, food boxes) and a set of filling types (water, pasta, rice).

2021

Associated SPS Event: IEEE MMSP 2021 Grand Challenge

This challenge is meant to consolidate and strengthen research efforts on image inpainting using structural guidance. We will prepare two tracks: image restoration (IR) and image editing (IE). In the IR track, we mask out random areas in an image and provide the edge maps within those areas to help restore the image.

2019

This challenge solicits contributions that demonstrate efficient algorithms for point cloud compression. In addition to compression solutions, submissions of new rendering schemes, evaluation methodologies, and publicly accessible point cloud content are encouraged.

The main goal of the CARLA Autonomous Driving Challenge is to achieve driving proficiency in realistic traffic situations.

To effectively prevent dengue fever outbreaks, cleaning up mosquito breeding sites is essential. This proposal provides labeled data for various types of containers and aims to build an object detection model for possible breeding sites. Inspectors can then pinpoint containers holding stagnant water from digital camera images or live video, improving the effectiveness of inspection and breeding site elimination.

This challenge is the 4th annual installment of the International Challenge on Activity Recognition, previously called the ActivityNet Large-Scale Activity Recognition Challenge, which was first hosted during CVPR 2016. It focuses on the recognition of daily-life, high-level, goal-oriented activities from user-generated videos such as those found on internet video portals.

Habitat Challenge is an autonomous navigation challenge that aims to benchmark and accelerate progress in embodied AI. In its first iteration, Habitat Challenge 2019 is based on the PointGoal task defined in Anderson et al. We will have two tracks for the PointGoal task:

  1. RGB track: input modality for agent is RGB image.
  2. RGBD track: input modalities for agent are RGB image and Depth.

The aim of this challenge is to solicit original contributions addressing the restoration of mobile videos, to help improve the quality of experience of video viewers and advance the state of the art in video restoration. Although quality degradation of videos occurs in various phases (capturing, encoding, storage, transmission, etc.), in this challenge we simplify the problem to one of post-processing.

The SUMO Challenge targets the development of algorithms for comprehensive understanding of 3D indoor scenes from 360° RGB-D panoramas. The target 3D models of indoor scenes include all visible layout elements and objects complete with pose, semantic information, and texture. Algorithms submitted are evaluated at 3 levels of complexity corresponding to 3 tracks of the challenge: oriented 3D bounding boxes, oriented 3D voxel grids, and oriented 3D meshes. SUMO Challenge results will be presented at the 2019 SUMO Challenge Workshop, at CVPR.

We will organize the first Learning from Imperfect Data (LID) challenge on object semantic segmentation and scene parsing, which includes two competition tracks:

Track 1: Object semantic segmentation with image-level supervision

Track 2: Scene parsing with point-based supervision

Outdoor scenes are often affected by fog, haze, rain, and smog; poor visibility in the atmosphere is caused by suspended particles. This challenge is meant to consolidate research efforts on single-image recovery in adverse weather, especially on hazy and rainy days. The challenge consists of two tracks: Hazy Image Recovering (HIR) and Rainy Image Recovering (RIR). In both tracks researchers are required to recover sharp images from given degraded (hazy and rainy) inputs.

Computer vision technologies have made impressive progress in recent years, but often at the expense of increasingly complex models needing ever more computational and storage resources. This workshop aims to improve the energy efficiency of computer vision solutions running on systems with stringent resource constraints, for example mobile phones, drones, or renewable-energy-powered systems. Efficient computer vision can enable many new applications (e.g., wildlife observation) powered by ambient renewable energy (e.g., solar, vibration, and wind).

Automatic caption generation is the task of producing a natural-language utterance (usually a sentence) that describes the visual content of an image. Practical applications of automatic caption generation include leveraging descriptions for image indexing or retrieval, and helping those with visual impairments by transforming visual signals into information that can be communicated via text-to-speech technology. The CVPR 2019 Conceptual Captions Challenge is based on two separate test sets:

T1) a blind test set that participants do not have direct access to. 

This workshop will bring together the participants of the first Robotic Vision Challenge, a new competition targeting both the computer vision and robotics communities. The new challenge focuses on probabilistic object detection. The novelty is the probabilistic aspect for detection: A new metric evaluates both the spatial and semantic uncertainty of the object detector and segmentation system. Providing reliable uncertainty information is essential for robotics applications where actions triggered by erroneous but high-confidence perception can lead to catastrophic results.

Drones, and UAVs in general, equipped with cameras have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, fast delivery, and surveillance. Consequently, automatic understanding of visual data collected from these platforms is in high demand, bringing computer vision and drones ever closer. We are excited to present VisDrone, a large-scale benchmark with carefully annotated ground truth for various important computer vision tasks, to make vision meet drones.

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content, the filling in of missing information, or the needed transformation and/or manipulation to achieve a desired target (with respect to perceptual quality, contents, or performance of apps working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

The goal of the joint COCO and Mapillary Workshop is to study object recognition in the context of scene understanding. While both the COCO and Mapillary challenges look at the general problem of visual recognition, the underlying datasets and the specific tasks in the challenges probe different aspects of the problem.

In-depth analysis of the state-of-the-art in video object segmentation.

As a continuing effort to push forward research on video object segmentation, we plan to host a second workshop with a challenge based on the YouTube-VOS dataset, targeting more diversified problem settings; we plan to provide two challenge tracks. The first track targets semi-supervised video object segmentation, the same setting as in the first workshop. The second track will be a new task named video instance segmentation, which targets automatically segmenting all object instances of pre-defined object categories from videos.

The domain of image compression has traditionally used approaches discussed in forums such as ICASSP, ICIP and other very specialized venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and challenge will be the first computer-vision event to explicitly focus on these fields. Many techniques discussed at computer-vision meetings have relevance for lossy compression.


Continuing the series of Open Images Challenges, the 2019 edition will be held at the International Conference on Computer Vision 2019. The challenge is based on the V5 release of the Open Images dataset. The images of the dataset are very varied and often contain complex scenes with several objects (explore the dataset). This year the Challenge will be again hosted by our partners at Kaggle.

Object detection is of significant value to the computer vision and pattern recognition communities, as it is one of the fundamental vision problems. In this workshop, we will introduce two new benchmarks for the object detection task: Objects365 and CrowdHuman, both of which are designed and collected in the wild. The Objects365 benchmark aims to address large-scale detection with 365 object categories.
