The Video and Image Processing Cup (VIP Cup) competition encourages teams of students to work together to solve real-world problems using video and image processing methods and techniques. Three finalist teams are chosen to present their work at ICIP and compete for the US$5,000 grand prize!
Technical Committees interested in submitting a call for proposals for upcoming VIP Cup competitions should visit the Technical Committees page for more information.
[Sponsored by the IEEE Signal Processing Society]
Increasing urbanization introduces traffic jams and congestion at many locations around a city. Together with accidents, this can drastically increase the average travel time from point A to point B. Junctions are especially critical, since delays and accidents tend to concentrate there. Under these circumstances, intelligent traffic systems capable of tasks such as vehicle detection, tracking, violation detection, and congestion control become indispensable.
The 2020 VIP-Cup challenge focuses on fisheye cameras mounted on street lamps at junctions, and on vehicle detection and tracking for a junction management system that optimizes the flow of traffic and synchronizes with other junctions to relieve bottlenecks throughout the city. Fisheye cameras are used because they are promising in terms of reliability and scene coverage at a given junction: they provide a 360-degree field of view, thus introducing key changes in traffic management.
Although fisheye cameras play a key role in junction management systems, they come with accompanying challenges as well: high distortion ratios; different scales of the same target object as it moves through different parts of the image; day/night view variance (night views suffer from low quality related to surrounding lighting conditions); and overexposure introduced by vehicle lights at night. A dataset of traffic videos from several junctions at different times of day and night is provided with annotations for training and validation (icip2020.issd.com.tr). The evaluation will be performed on separate test datasets.
Each team must be composed of:
The dataset is composed of more than 25,000 images for training and validation and 2,000 images for testing. Images vary from day to night and were collected at different junctions under different environmental and installation conditions.
The dataset is labeled in the standard COCO format; you may parse it however you like.
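A minimal sketch of reading such annotations, assuming the usual COCO JSON layout with `images`, `annotations`, and `categories` arrays (the file name, category, and box values below are made up for illustration, not taken from the challenge data):

```python
import json
from collections import defaultdict

def index_coco(coco):
    """Group COCO annotations by image file name."""
    id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}
    id_to_cat = {c["id"]: c["name"] for c in coco["categories"]}
    boxes = defaultdict(list)
    for ann in coco["annotations"]:
        # COCO bboxes are [x, y, width, height] in pixel coordinates
        boxes[id_to_file[ann["image_id"]]].append(
            (id_to_cat[ann["category_id"]], ann["bbox"])
        )
    return dict(boxes)

# For the real data you would do: coco = json.load(open("annotations.json"))
# Tiny hand-made sample in the same layout:
sample = {
    "images": [{"id": 1, "file_name": "junction_001.jpg"}],
    "categories": [{"id": 3, "name": "car"}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 3,
         "bbox": [412.0, 530.5, 96.0, 48.0]}
    ],
}
print(index_coco(sample))
```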
IEEE Signal Processing Society (SPS)
Computational Health Informatics Group, Oxford University
IBM Research Africa
Centre for Intelligent Sensing, Queen Mary University of London
The increasing availability of wearable cameras enables the collection of first-person videos (FPV) for the recognition of activities at home, in the workplace and during sport activities. FPV activity recognition has important applications, which include assisted living, activity tracking and life-logging. The main challenges of FPV activity recognition are the presence of outlier motions (for example due to other people captured by the camera), motion blur, illumination changes and self-occlusions.
The 2019 VIP-Cup challenge focuses on FPV from a chest-mounted camera and on the privacy-aware recognition of activities, which include generic activities, such as walking, person-to-person interactions, such as chatting and handshaking, and person-to-object interactions, such as using a computer or a whiteboard. As videos captured by body cameras may leak private or sensitive information about individuals, the evaluation of the IEEE VIP-Cup challenge entries will include privacy enhancing solutions jointly with the recognition performance.
A dataset of activities from several subjects is provided with the annotation for training and validation. The evaluation will be performed based on separate test datasets.
Each finalist team invited to the ICIP 2019 will receive travel support by the IEEE SPS on a reimbursement basis. A team member is offered up to $1,200 for continental travel, or $1,700 for intercontinental travel. A maximum of 3 members per team will be eligible for travel support.
Each team must be composed of: (i) One faculty member (the Supervisor); (ii) At most one graduate student (the Tutor), and; (iii) At least 3 but no more than 10 undergraduates. At least three of the undergraduate team members must be either IEEE Signal Processing Society (SPS) members or SPS student members. Download full details document.
REGISTER: VIP Cup 2019 Registration Page
Grand Prize - Team Name: PolyUTS
University: University of Technology Sydney and The Hong Kong Polytechnic University
Supervisor: Sean He | Tutor: Rui Zhao
Students: Hayden Crain, Alex Young, Van Khai Do, Nirosh Rambukkana, Tianqi Wen,
Jichen Zhang, Zihang LYU, Yifei Fan, Chris Lee, Evan Cheng
First Runner-Up - Team Name: BUET Ravenclaw
University: Bangladesh University of Engineering and Technology
Supervisor: Mohammad Ariful Haque
Students: Sheikh Asif Imran Shouborno, Md. Tariqul Islam,
K. M. Naimul Hassan, Md. Mushfiqur Rahman
Second Runner-Up - Team Name: BUET Synapticans
University: Bangladesh University of Engineering and Technology
Supervisor: Taufiq Hasan | Tutor: Asif Shahriyar Sushmit
Students: Ankan Ghosh Dastider, Nayeeb Rashid, Ridwan Abrar,
Ahsan Habib Akash, Md. Abrar Istiak Akib, Partho Ghosh
Questions should be directed to Dr. Girmaw Abebe Tadesse.
PROPOSED CHALLENGE (Download full document)
The volume, variety, and velocity of medical imaging data are exploding, making it impractical for clinicians to use the available information resources efficiently. At the same time, human interpretation of such large amounts of medical imaging data is significantly error-prone, reducing the possibility of extracting informative data. The ability to process such large amounts of data promises to decipher the as-yet undecoded information within medical images; develop predictive and prognostic models for personalized diagnosis; allow comprehensive study of tumor phenotype; and assess tissue heterogeneity for the diagnosis of different types of cancer. Recently, there has been a great surge of interest in Radiomics, which refers to the process of extracting and analyzing several semi-quantitative (e.g., attenuation, shape, size, and location) and quantitative features (e.g., wavelet decomposition, histogram, and gray-level intensity) from medical images, with the ultimate goal of obtaining predictive or prognostic models. A Radiomics workflow typically consists of the following four main processing tasks:
(i) Image acquisition/modality;
(ii) Image segmentation;
(iii) Feature extraction and qualification, and;
(iv) Statistical analysis and model building.
Radiomics features can be extracted from different imaging modalities, including Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT), and can therefore provide complementary information for decision making in clinical oncology.
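As an illustration of the simplest quantitative features mentioned above, the following sketch computes first-order (histogram-based) statistics over the intensities inside a segmented tumor mask. The feature names and bin count are assumptions for illustration, not the challenge's official feature set:

```python
import math
from statistics import mean, pstdev

def first_order_features(intensities, bins=32):
    """First-order (histogram-based) features of a tumor region.

    `intensities` is a flat list of voxel intensity values inside the
    segmented tumor mask (feature choice is illustrative only).
    """
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1.0  # guard against a constant region
    counts = [0] * bins
    for v in intensities:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(intensities)
    probs = [c / n for c in counts if c]
    return {
        "mean": mean(intensities),
        "std": pstdev(intensities),
        "min": lo,
        "max": hi,
        # Shannon entropy of the intensity histogram, in bits
        "entropy": -sum(p * math.log2(p) for p in probs),
    }
```

In a real pipeline the intensity list would come from applying the tumor mask to the CT volume; here the input is kept as a plain list so the sketch stays self-contained.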
Recent developments in Signal Processing and Machine Learning have paved the way for the emergence of cancer Radiomics. However, the effectiveness and accuracy of Signal Processing and Machine Learning solutions in this field rely heavily on the availability of a segmented tumor region, i.e., prior knowledge of where the tumor is located. Consequently, among the four tasks above, segmentation is considered the initial and most critical task for further advancing cancer Radiomics. The conventional clinical approach to segmentation is manual annotation of the tumor region; however, this is extremely time-consuming, depends on the personal expertise/opinion of the clinician, and is highly sensitive to inter-observer variability. To address these critical issues, automatic and semi-automatic segmentation methods (e.g., based on image-level tags or bounding boxes) are currently being investigated to minimize manual input, increase consistency in labeling the tumor region, and obtain results that are accurate and acceptable in comparison to manually labeled data.
For the 2018 VIP-CUP, we propose a challenge on segmentation of the lung cancer tumor region based on a dataset consisting of pre-treatment Computed Tomography (CT) scans of more than 400 patients. For the initial stage of the competition, a subset of the data along with annotations will be provided as the training set, together with a smaller subset for validation purposes. The evaluation will then be performed on a test set provided closer to the submission deadline. For segmenting tumors, competing teams may use conventional image processing techniques or deep learning methods, bearing in mind the relatively small size of the available dataset.
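Segmentation entries in challenges of this kind are commonly scored by volumetric overlap with the manual annotation, for instance via the Dice similarity coefficient. It is used here purely as an illustration; the official metric is defined in the competition instructions:

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 values of equal length.
    Dice = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    assert len(pred_mask) == len(true_mask)
    intersection = sum(p and t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # Two empty masks are conventionally treated as a perfect match
    return 2.0 * intersection / total if total else 1.0
```

For example, a prediction that overlaps the ground truth on one of two labeled voxels each scores `dice_coefficient([1, 1, 0, 0], [0, 1, 1, 0]) == 0.5`.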
Teams satisfying the eligibility criteria outlined below are invited to participate in the VIP-CUP. View the detailed competition instructions together with the data sources.
Eligibility Criteria: Each team must be composed of: (i) One faculty member (the Supervisor); (ii) At most one graduate student (the Tutor), and; (iii) At least three but no more than ten undergraduates. At least three of the undergraduate team members must be either IEEE Signal Processing Society (SPS) members or SPS student members. Postdocs and research associates are not considered as faculty members. A graduate student is a student having earned at least a 4-year University degree at the time of submission. An undergraduate student is a student without a 4-year degree. Questions about the 2018 VIP-CUP should be directed to Dr. Arash Mohammadi.
IEEE VIP Cup 2017: Traffic Sign Detection Under Challenging Conditions
The IEEE Signal Processing Society announces the first edition of the Signal Processing Society Video and Image Processing (VIP) Cup: traffic sign detection under challenging conditions. Visit the 2017 VIP Cup Website.
Robust and reliable traffic sign detection is necessary to bring autonomous vehicles onto our roads. State-of-the-art traffic sign detection algorithms in the literature successfully perform the task over existing databases that mostly lack realistic road conditions. This competition focuses on detecting traffic signs under such challenging conditions.
To facilitate this task and competition, we introduce a novel video dataset that contains a variety of road conditions. In these video sequences, we vary the type and level of the challenging conditions, including a range of lighting conditions and levels of blur, haze, rain, and snow. The goal of this challenge is to implement traffic sign detection algorithms that perform robustly under such challenging environmental conditions.
Any eligible team can participate in the competition. Detailed guidelines and the dataset are planned for release on March 15, 2017, and participating teams should complete their submissions by July 1, 2017. The three best teams will be selected and announced by August 1, 2017. The three finalist teams will be judged at ICIP 2017, held September 17-20, 2017. In addition to algorithmic performance, demonstration and presentation quality will also affect the final ranking.
The champion team will receive a grand prize of $5,000. The first and second runners-up will receive prizes of $2,500 and $1,500, respectively, in addition to travel grants and complimentary conference registrations. Each finalist team invited to ICIP 2017 will receive a travel grant supported by the SPS on a reimbursement basis. A team member is offered up to $1,200 for continental travel, or $1,700 for intercontinental travel. A maximum of three members per team will be eligible for travel support.