Video & Image Processing Cup


IEEE Video and Image Processing Cup (VIP Cup)

The Video and Image Processing Cup (VIP Cup) competition encourages teams of students to work together to solve real-world problems using video and image processing methods and techniques. Three finalist teams are chosen to present their work at ICIP and compete for the US$5,000 grand prize!

Technical Committees interested in submitting a call for proposals for upcoming VIP Cup competitions should visit the Technical Committees page for more information.

 

IEEE VIP Cup 2020
Real-time vehicle detection and tracking at junctions using a fisheye camera

[Sponsored by the IEEE Signal Processing Society]

Organizers

Introduction

Increasing urbanization introduces traffic jams and congestion at many locations around a city. Together with accidents, these can drastically increase the average travel time from point A to point B. Junctions are especially critical, since delays and accidents tend to concentrate there. Under these circumstances, intelligent traffic systems capable of tasks such as vehicle detection, tracking, violation detection, and congestion control become unavoidable.

The 2020 VIP Cup challenge focuses on fisheye cameras mounted on street lamps at junctions, and on vehicle detection and tracking for a junction management system that optimizes the flow of traffic and synchronizes with other junctions to relieve bottlenecks throughout the city. Fisheye cameras are used because they are promising in terms of reliability and scene coverage at a chosen junction: they provide a 360-degree observation view, introducing key changes in traffic management.

Although fisheye cameras play a key role in junction management systems, they come with accompanying challenges: high distortion ratios; different scales of the same target object moving in different parts of the image; day/night view variance (the night view suffers from low quality related to surrounding lighting conditions); and overexposure introduced by vehicle lights at night. A dataset of traffic videos from several junctions at different times of day and night is provided with annotations for training and validation (icip2020.issd.com.tr). The evaluation will be performed on separate test datasets.
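To build intuition for why the same vehicle appears at different scales across a fisheye frame, the sketch below contrasts an idealized equidistant fisheye projection with the pinhole model. This is an illustration only: real lenses follow calibrated polynomial models, and nothing here is part of the challenge tooling.

```python
import math

def pinhole_radius(theta, f):
    """Pinhole model: image radius r = f * tan(theta); diverges near 90 degrees."""
    return f * math.tan(theta)

def equidistant_fisheye_radius(theta, f):
    """Idealized equidistant fisheye model: r = f * theta.

    The radial scale stays bounded, so objects far from the optical axis
    are compressed relative to a pinhole view -- one reason the same
    vehicle changes apparent size as it crosses the junction.
    """
    return f * theta

# Compare radial image positions for an object 80 degrees off-axis (f = 1):
theta = math.radians(80)
print(pinhole_radius(theta, 1.0))              # ~5.67
print(equidistant_fisheye_radius(theta, 1.0))  # ~1.40
```

At 80 degrees off-axis the pinhole radius is roughly four times the equidistant one, which is why detectors must cope with large scale variation between the image center and its periphery.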

 

Figure 1: Sample fisheye camera view.

 

Schedule

  • 25 June 2020: Initial Training Dataset released
  • 30 July 2020: Test Dataset 1 released
  • 30 August 2020: Submission deadline
  • 15 September 2020: Finalists (best three teams) announced
  • 25 October 2020: Competition on Test Dataset 2 virtually at ICIP 2020

Eligibility Criteria

Each team must be composed of:

  • One faculty member (the Supervisor);
  • At most one graduate student (the Tutor);
  • At least three but no more than ten undergraduate students (the Team Members)
  • At least three of the undergraduate team members must be either IEEE Signal Processing Society (SPS) members or SPS student members.
  • The VIP-Cup is a competition for undergraduate students and therefore Master’s students, regardless of the duration of their Bachelor’s degree, cannot participate as regular Team Members.
  • Participants are expected to have basic knowledge of machine learning/deep learning concepts.

Tasks to Execute and Expected Outcomes

  • Detection of vehicles with high average accuracy and few false positives
  • (Extra-1) Devise new ideas to track vehicle flow from entering the junction until exiting it

Datasets (Training, Validation, Testing datasets)

The dataset is composed of more than 25,000 (twenty-five thousand) images for training and validation, plus 2,000 (two thousand) images for testing. The images vary from day to night and were collected at different junctions under different environmental and installation conditions.

The dataset is labeled in the standard COCO format; you may parse it however you like.
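As a starting point, a COCO-format annotation file can be parsed with nothing but the standard library. This is a minimal sketch assuming the standard COCO layout (top-level "images" and "annotations" lists, with boxes stored as [x, y, width, height]); the helper name is ours, not part of any provided tooling.

```python
import json
from collections import defaultdict

def load_coco_boxes(path):
    """Parse a COCO-format annotation file into {image_file: [(x, y, w, h), ...]}.

    Assumes the standard COCO layout: top-level "images" and "annotations"
    lists, with each annotation carrying an "image_id" and a "bbox" given
    as [x, y, width, height].
    """
    with open(path) as f:
        coco = json.load(f)
    # Map internal image ids to file names.
    id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}
    # Group bounding boxes by image file.
    boxes = defaultdict(list)
    for ann in coco["annotations"]:
        boxes[id_to_file[ann["image_id"]]].append(tuple(ann["bbox"]))
    return dict(boxes)
```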

Evaluation Criteria (Scores for Tasks, Outcomes and Overall)

  • Detection speed (20 points): the submitted algorithms will be benchmarked on a selected device.
  • Vehicle detection accuracy (80 points): a detection counts only if the detected vehicle is bounded by a bounding box of the required size. The final evaluation is done by ISSD on a separate dataset by scoring each image and averaging over the chosen set:
    • each false-positive detection is penalized (-1 point per image);
    • each failure to detect is penalized (-2 points per image).
  • Extra-1 (20 points): estimation of the correct path of a vehicle from entering the junction until leaving it.
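The per-image penalty scheme above can be sketched as follows. The official matching rule ("bounded by the exact required size of bounding box") is decided by the organizers; this sketch substitutes a greedy match at an assumed IoU threshold purely for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def image_penalty(detections, ground_truth, thr=0.5):
    """Per-image penalty: -1 per false positive, -2 per missed ground truth.

    Uses greedy matching at an assumed IoU threshold; the official
    matching rule may differ.
    """
    unmatched_gt = list(ground_truth)
    penalty = 0
    for det in detections:
        best = max(unmatched_gt, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thr:
            unmatched_gt.remove(best)   # true positive
        else:
            penalty -= 1                # false positive
    penalty -= 2 * len(unmatched_gt)    # missed detections
    return penalty
```

For example, a single detection that overlaps no ground-truth box costs -1 (false positive) plus -2 for the missed vehicle, for a per-image penalty of -3.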

Submission guidelines

  • Teams are required to submit their model inference evaluation via an intermediate representation, which will be provided (30 July).
  • Evaluation scripts (mAP, average recall, average inference time) will be provided to teams (30 July) so they can assess their work iteratively.
  • The best 3 teams on the leaderboard will be asked to submit their source code so that we can reproduce the model (all copyrights are preserved).
  • Extra-1 will have a different qualification than the detection model itself (released on 30 July).
  • The winner will be announced after model reproduction on our target machines.
  • Use of a publicly available online model will NOT be accepted, and the team will be disqualified.

Registration Guidelines

  • Get familiar with the problem.
  • Get familiar with the submission guidelines.
  • Register through the web page icip2020.issd.com.tr.


Past IEEE VIP Cup Competitions

IEEE VIP Cup 2019: Activity Recognition from Body Cameras

IEEE ICIP 2019 | September 22-25, 2019 | VIP Cup 2019 Website | Details Document

 

SUPPORTED BY:

IEEE Signal Processing Society (SPS)
Computational Health Informatics Group, Oxford University
IBM Research Africa
Centre for Intelligent Sensing, Queen Mary University of London

INTRODUCTION

The increasing availability of wearable cameras enables the collection of first-person videos (FPV) for the recognition of activities at home, in the workplace and during sport activities. FPV activity recognition has important applications, which include assisted living, activity tracking and life-logging. The main challenges of FPV activity recognition are the presence of outlier motions (for example due to other people captured by the camera), motion blur, illumination changes and self-occlusions.

The 2019 VIP-Cup challenge focuses on FPV from a chest-mounted camera and on the privacy-aware recognition of activities, which include generic activities, such as walking, person-to-person interactions, such as chatting and handshaking, and person-to-object interactions, such as using a computer or a whiteboard. As videos captured by body cameras may leak private or sensitive information about individuals, the evaluation of the IEEE VIP-Cup challenge entries will include privacy enhancing solutions jointly with the recognition performance.

A dataset of activities from several subjects is provided with the annotation for training and validation. The evaluation will be performed based on separate test datasets.

PRIZES

  • The Champion: $5,000
  • The 1st Runner-up: $2,500
  • The 2nd Runner-up: $1,500

TRAVEL SUPPORT

Each finalist team invited to the ICIP 2019 will receive travel support by the IEEE SPS on a reimbursement basis. A team member is offered up to $1,200 for continental travel, or $1,700 for intercontinental travel. A maximum of 3 members per team will be eligible for travel support.

ELIGIBILITY CRITERIA

Each team must be composed of: (i) One faculty member (the Supervisor); (ii) At most one graduate student (the Tutor), and; (iii) At least 3 but no more than 10 undergraduates. At least three of the undergraduate team members must be either IEEE Signal Processing Society (SPS) members or SPS student members. Download full details document.

IMPORTANT DATES

  • April 30, 2019 - Participation Guidelines and Initial Training Dataset released
  • May 5, 2019 - Additional Training Dataset released
  • June 30, 2019 - Submission Deadline
  • July 15, 2019 - Finalists (best three teams) announced
  • July 30, 2019 - Test Dataset 1 released
  • September 22, 2019 - Competition on Test Dataset 2 at ICIP 2019

REGISTER: VIP Cup 2019 Registration Page

ORGANIZING COMMITTEE

  • Girmaw Abebe Tadesse, University of Oxford
  • Oliver Bent, University of Oxford
  • Kommy Woldemariam, IBM Research
  • Andrea Cavallaro, Queen Mary University of London

FINALIST TEAMS

Grand Prize - Team Name: PolyUTS
University: University of Technology Sydney and The Hong Kong Polytechnic University
Supervisor: Sean He | Tutor: Rui Zhao
Students: Hayden Crain, Alex Young, Van Khai Do, Nirosh Rambukkana, Tianqi Wen,
Jichen Zhang, Zihang LYU, Yifei Fan, Chris Lee, Evan Cheng

 

First Runner-Up - Team Name: BUET Ravenclaw
University: Bangladesh University of Engineering and Technology
Supervisor: Mohammad Ariful Haque
Students: Sheikh Asif Imran Shouborno, Md. Tariqul Islam,
K. M. Naimul Hassan, Md. Mushfiqur Rahman

 

Second Runner-Up - Team Name: BUET Synapticans
University: Bangladesh University of Engineering and Technology
Supervisor: Taufiq Hasan | Tutor: Asif Shahriyar Sushmit
Students: Ankan Ghosh Dastider, Nayeeb Rashid, Ridwan Abrar,
Ahsan Habib Akash, Md. Abrar Istiak Akib, Partho Ghosh

Questions should be directed to Dr. Girmaw Abebe Tadesse.

 

Download Call for Participation

 

 

IEEE VIP Cup 2018: Lung Cancer Radiomics-Tumor Region Segmentation

ORGANIZING COMMITTEE

PROPOSED CHALLENGE (Download full document)

The volume, variety, and velocity of medical imaging data are exploding, making it impractical for clinicians to utilize the available information resources efficiently. At the same time, human interpretation of such large amounts of medical imaging data is significantly error-prone, reducing the possibility of extracting informative data. The ability to process such large amounts of data promises to decipher the un-decoded information within medical images; develop predictive and prognostic models for personalized diagnosis; allow comprehensive study of tumor phenotype; and assess tissue heterogeneity for diagnosis of different types of cancer. Recently, there has been a great surge of interest in Radiomics, which refers to the process of extracting and analyzing several semi-quantitative features (e.g., attenuation, shape, size, and location) and quantitative features (e.g., wavelet decomposition, histogram, and gray-level intensity) from medical images, with the ultimate goal of obtaining predictive or prognostic models. A Radiomics workflow typically consists of the following four main processing tasks:

(i) Image acquisition/modality;
(ii) Image segmentation;
(iii) Feature extraction and qualification, and;
(iv) Statistical analysis and model building.

Radiomics features can be extracted from different imaging modalities, including Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT), and can therefore provide complementary information for clinical decision making in oncology.

Recent developments and advances in signal processing and machine learning solutions have paved the way for the emergence of cancer Radiomics. However, the effectiveness and accuracy of signal processing and machine learning solutions in this field rely heavily on the availability of a segmented tumor region, i.e., prior knowledge of where the tumor is located. Consequently, among the aforementioned four tasks, segmentation is considered the initial and most critical task for further advancing cancer Radiomics. The conventional clinical approach to segmentation is manual annotation of the tumor region; however, this is extremely time-consuming, depends on the personal expertise/opinion of the clinician, and is highly sensitive to inter-observer variability. To address these critical issues, automatic (and semi-automatic) segmentation methods are currently being investigated (e.g., using image-level tags or bounding boxes) to minimize manual input, increase consistency in labeling the tumor region, and obtain results that are accurate and acceptable in comparison to manually labeled data.

In the 2018 VIP Cup, we propose a challenge on segmentation of the lung cancer tumor region based on a dataset consisting of pre-treatment Computed Tomography (CT) scans of more than 400 patients. For the initial stage of the competition, a subset of the data along with annotations will be provided as the training set, together with a smaller subset for validation purposes. The evaluation will then be performed on a test set provided closer to the submission deadline. For segmenting tumors, the competing teams may choose either conventional image processing techniques or deep learning methods, subject to the relatively small datasets available.
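Segmentation outputs are typically compared against manual annotations with overlap metrics; the Dice coefficient is one common choice. The challenge's exact scoring is specified in the full competition document, so treat the sketch below purely as an illustration of the idea.

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (nested lists of 0/1).

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap with the
    manual annotation, 0.0 means no overlap at all.
    """
    inter = sum(p and t
                for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    size = sum(sum(row) for row in pred) + sum(sum(row) for row in truth)
    # Two empty masks are considered a perfect match.
    return 2.0 * inter / size if size else 1.0
```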

PARTICIPATION GUIDELINES

Teams satisfying the eligibility criteria outlined below are invited to participate in the VIP Cup. View the detailed competition instructions together with the data sources.

Eligibility Criteria: Each team must be composed of: (i) One faculty member (the Supervisor); (ii) At most one graduate student (the Tutor), and; (iii) At least three but no more than ten undergraduates. At least three of the undergraduate team members must be either IEEE Signal Processing Society (SPS) members or SPS student members. Postdocs and research associates are not considered as faculty members. A graduate student is a student having earned at least a 4-year University degree at the time of submission. An undergraduate student is a student without a 4-year degree. Questions about the 2018 VIP-CUP should be directed to Dr. Arash Mohammadi.

IMPORTANT DATES

  • May 24, 2018 - Dataset available
  • July 31, 2018 - Team registration on IEEE VIP Cup
  • August 14, 2018 - Test data released
  • August 26, 2018 - Results Submission
  • September 8, 2018 - Finalist Teams Announced
  • October 7, 2018 - VIP Cup at ICIP in Athens

 

Download Call for Participation

 

IEEE VIP Cup 2017: Traffic Sign Detection Under Challenging Conditions

The IEEE Signal Processing Society announces the first edition of the Signal Processing Society Video and Image Processing (VIP) Cup: traffic sign detection under challenging conditions. Visit the 2017 VIP Cup Website.

 


 

Robust and reliable traffic sign detection is necessary to bring autonomous vehicles onto our roads. State-of-the-art traffic sign detection algorithms in the literature perform the task successfully on existing databases, which mostly lack realistic road conditions. This competition focuses on detecting traffic signs under such challenging conditions.

To facilitate this task and competition, we introduce a novel video dataset that contains a variety of road conditions. Across these video sequences, we vary the type and level of the challenging conditions, including a range of lighting conditions and levels of blur, haze, rain, and snow. The goal of this challenge is to implement traffic sign detection algorithms that perform robustly under such challenging environmental conditions.
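For intuition, degradations of the kind this dataset varies, such as low light and blur, can be simulated on a grayscale image in a few lines of plain Python. This is an illustrative toy, not the procedure used to build the competition dataset.

```python
def darken(img, factor=0.4):
    """Simulate low light by scaling intensities; img is a 2-D list of 0-255 values."""
    return [[min(255, int(v * factor)) for v in row] for row in img]

def box_blur3(img):
    """3x3 box blur with edge clamping (an illustrative stand-in for real blur)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(h - 1, max(0, y + dy))  # clamp to image edges
                    xx = min(w - 1, max(0, x + dx))
                    total += img[yy][xx]
            out[y][x] = total // 9
    return out
```

Applying such transforms to a clean training image gives a rough feel for how detection confidence degrades under the challenge conditions before the actual dataset is in hand.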

Any eligible team can participate in the competition, whose detailed guidelines and dataset are planned to be released on March 15, 2017 and participating teams should complete their submission by July 1, 2017. The three best teams are selected and announced by August 1, 2017. Three finalist teams will be judged at ICIP 2017, which will be held September 17-20, 2017. In addition to algorithmic performances, demonstration and presentation performances will also affect the final ranking.

The champion team will receive a grand prize of $5,000. The first and the second runner-up will receive a prize of $2,500 and $1,500, respectively, in addition to travel grants and complimentary conference registrations. Each finalist team invited to ICIP 2017 will receive travel grant supported by the SPS on a reimbursement basis. A team member is offered up to $1,200 for continental travel, or $1,700 for intercontinental travel. A maximum of three members per team will be eligible for travel support.

ORGANIZING COMMITTEE

 

 

 

 
