News and Resources for Members of the IEEE Signal Processing Society
Speaker | Date | Affiliation |
---|---|---|
Raja Giryes | 19 May | Tel Aviv University |
Laura Waller | 2 June | UC Berkeley |
Michael Unser | 16 June | EPFL |
Katherine L. (Katie) Bouman | 30 June | Caltech |
Jong Chul Ye | 14 July | KAIST, Korea |
Orazio Gallo | 28 July | Nvidia |
Xiao Xiang Zhu | 11 August | Technische Universität München |
Saiprasad Ravishankar | 25 August | Michigan State University |
Anat Levin | 8 September | Technion, Israel |
Pier Luigi Dragotti | 22 September | Imperial College, UK |
John Wright | 6 October | Columbia University |
Bihan Wen | 20 October | Nanyang Technological University (NTU), Singapore |
Nicole Seiberlich | 3 November | University of Michigan |
Yoram Bresler | 17 November | UIUC |
Singanallur Venkatakrishnan | 1 December | Oak Ridge National Laboratory |
Jay Webster Stayman | 15 December | Johns Hopkins University |
Upcoming Webinars @ SPACE
14 July 2020: Dr. Jong Chul Ye
28 July 2020: Dr. Orazio Gallo
Presenters: Dr. Jong Chul Ye, KAIST, Korea (14 July 2020); Dr. Orazio Gallo, Nvidia (28 July 2020)
Dates: 14 July 2020 and 28 July 2020
Time: 11:00 am EDT (New York time)
Duration: Approximately 1 hour
Registration: Webinar Registration
The IEEE Signal Processing Society would like to express our concern and support for the members of our global community and all affected by the current COVID-19 pandemic. We appreciate your continued patience and support as we work together to navigate these unforeseen and uncertain circumstances. We hope that you, your families, and your communities are safe!
Speaker: Jong Chul Ye, KAIST, Korea
Title: Optimal transport driven CycleGAN for unsupervised learning in inverse problems
Abstract: Penalized least squares (PLS) is a classic method for solving inverse problems, in which a regularization term is added to stabilize the solution. Optimal transport (OT) is another mathematical framework that has recently received significant attention from the computer vision community, as it provides a means to transport one distribution to another in an unsupervised manner. The cycle-consistent generative adversarial network (cycleGAN) is a recent extension of the GAN designed to learn target distributions with less mode-collapsing behavior. Although these approaches are similar in that none of them requires supervised training, the algorithms look different, so the mathematical relationship between them is not clear.
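For reference, the classic PLS formulation mentioned above can be written as follows; the notation here is generic (my own, not taken from the talk), with y the measurements, A the forward operator, R the regularizer, and λ the trade-off weight:

```latex
\hat{x} \;=\; \arg\min_{x}\; \| y - A x \|_{2}^{2} \;+\; \lambda\, R(x)
```

The first term enforces consistency with the measurements, while the regularizer R stabilizes the otherwise ill-posed inversion.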
In this talk, I describe an important advance that unveils this missing link. Specifically, we propose a novel PLS cost that measures the sum of distances in the measurement space and the latent space. We show that, when used as a transportation cost for optimal transport, this new PLS cost leads to a novel cycleGAN architecture through the Kantorovich dual OT formulation. One of the most important advantages of this formulation is that, depending on the knowledge of the forward problem, distinct variations of the cycleGAN architecture can be derived. The new cycleGAN formulation has been applied to various imaging problems, such as accelerated magnetic resonance imaging (MRI), super-resolution/deconvolution microscopy, low-dose X-ray computed tomography (CT), satellite imagery, etc. Experimental results confirm the efficacy and flexibility of the theory.
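As a rough illustration of the idea (a sketch under my own assumptions, not the exact cost from the talk or the underlying papers), the following Python/PyTorch snippet combines a cycle-consistency term in the measurement space with an adversarial term in the image space, assuming a known forward operator `A` and hypothetical generator/discriminator modules `G` and `D`:

```python
# Hypothetical sketch of an unsupervised cycleGAN-style loss for an inverse
# problem with a KNOWN forward operator A (e.g., undersampling or blurring).
# G maps measurements back to images; D scores whether an image looks real.
# Illustrative only; not the exact OT-driven formulation from the talk.
import torch
import torch.nn as nn

def ot_cyclegan_losses(G, D, A, x_real, y_meas, lam=10.0):
    """Return (generator_loss, discriminator_loss) for one unpaired batch."""
    x_fake = G(y_meas)                        # reconstruct an image from measurements
    # Cycle/data consistency in the measurement space: re-applying A to the
    # reconstruction should reproduce the observed measurements.
    cycle = (A(x_fake) - y_meas).abs().mean()
    # Least-squares GAN terms (one common choice of adversarial loss).
    g_adv = ((D(x_fake) - 1) ** 2).mean()
    d_loss = ((D(x_real) - 1) ** 2).mean() + (D(x_fake.detach()) ** 2).mean()
    g_loss = g_adv + lam * cycle
    return g_loss, d_loss

# Tiny smoke test with linear stand-ins for G, D, and a subsampling A.
if __name__ == "__main__":
    n = 16
    A = lambda x: x[:, : n // 2]              # toy forward operator: keep half the entries
    G = nn.Linear(n // 2, n)                  # toy generator
    D = nn.Sequential(nn.Linear(n, 1))        # toy discriminator
    x_real, y_meas = torch.randn(4, n), torch.randn(4, n // 2)
    g_loss, d_loss = ot_cyclegan_losses(G, D, A, x_real, y_meas)
    print(float(g_loss), float(d_loss))
```

In this sketch the known operator A plays the role of one of the two cycleGAN generators; if the forward model were unknown, a second learned generator would take its place, which mirrors the abstract's point that different levels of knowledge of the forward problem yield different cycleGAN variants.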
Jong Chul Ye is a Professor in the Department of Bio and Brain Engineering and an Adjunct Professor in the Department of Mathematical Sciences at the Korea Advanced Institute of Science and Technology (KAIST), Korea. He received the B.Sc. and M.Sc. degrees from Seoul National University, Korea, and the Ph.D. degree from Purdue University, West Lafayette, IN. Before joining KAIST, he was a Senior Researcher at Philips Research and GE Global Research in New York, and a postdoctoral fellow at the University of Illinois at Urbana-Champaign. He has served as an associate editor of IEEE Transactions on Image Processing and IEEE Transactions on Computational Imaging, and as an editorial board member for Magnetic Resonance in Medicine.
He is currently an associate editor for IEEE Transactions on Medical Imaging and a Senior Editor of IEEE Signal Processing Magazine. He is an IEEE Fellow, Chair of the IEEE SPS Computational Imaging Technical Committee, and an IEEE EMBS Distinguished Lecturer. He was a General Co-Chair of the 2020 IEEE International Symposium on Biomedical Imaging (ISBI) (with Mathews Jacob) and will be a Program Co-Chair of the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing in Seoul. His group was the first winner of the 2009 Recon Challenge at the ISMRM Workshop with the k-t FOCUSS algorithm, and the runner-up at the 2016 Low Dose CT Grand Challenge organized by the American Association of Physicists in Medicine (AAPM), with the world's first deep learning algorithm for low-dose CT reconstruction. His current research focuses on deep learning theory and algorithms for various image reconstruction problems in X-ray CT, MRI, optics, ultrasound, remote sensing, etc.
Speaker: Orazio Gallo, Nvidia
Title: Depth Estimation from RGB Images with Applications to Novel View Synthesis and Autonomous Navigation
Abstract:
Depth information is a central requirement for many computer vision and computational imaging applications. A number of sensors can capture depth directly. Standard RGB cameras offer a particularly attractive alternative thanks to their lower price point and widespread availability, but they also introduce new challenges. In this talk I will address two of the main ones.
The first challenge is dynamic content. When a scene is captured with a monocular camera, moving objects break the epipolar constraints, thus making it impossible to directly estimate depth. I will describe a method to address this issue while also improving the quality of the depth estimation in the static regions of the scene. I will then use the resulting depth to synthesize novel views of the scene or to create effects like the bullet-time effect, but without the need for synchronized cameras.
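To make the epipolar argument concrete (in standard two-view notation, not anything specific to the talk): for a static point seen in two views related by a fundamental matrix F, corresponding pixels x and x' must satisfy

```latex
{x'}^{\top} F \, x = 0
```

If the point moves between the two exposures, its projection in the second view generally leaves the epipolar line of x, the constraint is violated, and triangulating the pair no longer yields a meaningful depth; this is why dynamic content requires special handling.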
The issue of dynamic content can also be addressed by using multiple cameras simultaneously, as in the case of stereo. However, while state-of-the-art deep-learning stereo algorithms produce high-quality depth, they are far from real time, which is a central requirement for applications such as autonomous navigation. I will present Bi3D, a stereo algorithm that tackles this second challenge. Bi3D allows one to trade depth quantization for latency. Given a strict time budget, Bi3D can detect objects closer than a given distance D in as little as 5 ms. It can also estimate depth with arbitrarily coarse quantization, with complexity linear in the number of quantization levels. For instance, it takes 9.8 ms to estimate a 2-bit depth map and 18.5 ms for a 3-bit depth map. Bi3D can also use the allotted quantization levels to estimate regular, continuous depth, but within a specific depth range.
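The following sketch illustrates the quantization-versus-latency idea in the abstract: each candidate depth plane costs one binary decision, so runtime grows linearly with the number of quantization levels. It assumes a hypothetical binary classifier `closer_than(left, right, d)` and is a conceptual stand-in, not the actual Bi3D network:

```python
# Conceptual sketch of depth from binary decisions. `closer_than` stands in
# for a learned binary stereo classifier; here it simply thresholds a toy
# disparity map (larger disparity = closer).
import numpy as np

def closer_than(left, right, d):
    """Toy binary classifier: True where the scene is closer than plane d."""
    disparity = np.abs(left - right)           # placeholder for a learned network
    return disparity > d

def quantized_depth(left, right, planes):
    """Coarse depth map with len(planes)+1 levels built from binary decisions."""
    level = np.zeros(left.shape, dtype=np.int32)
    for d in sorted(planes):                    # one binary pass per quantization plane
        level += closer_than(left, right, d).astype(np.int32)
    return level                                # higher level = closer object

def proximity_alarm(left, right, D):
    """Single-plane query: is anything closer than D? Minimal latency."""
    return closer_than(left, right, D).any()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left, right = rng.random((4, 4)), rng.random((4, 4))
    print(quantized_depth(left, right, planes=[0.2, 0.4, 0.6]))  # 3 planes -> 4 levels (2-bit-style map)
    print(proximity_alarm(left, right, D=0.5))
```

The design point this illustrates is that a tight latency budget can be met by evaluating only a single plane (the proximity alarm), while spending more time on additional planes refines the quantization of the resulting depth map.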
Orazio Gallo is a Principal Research Scientist at NVIDIA Research. He is interested in computational imaging, computer vision, deep learning and, in particular, the intersection of the three. Alongside topics such as view synthesis and 3D vision, his recent interests also include integrating traditional computer vision and computational imaging knowledge into deep learning architectures. Previously, his research focused on the way pictures are captured, processed, and consumed by the photographer or the viewer.
Dr. Gallo is an associate editor of the IEEE Transactions on Computational Imaging and was an associate editor of Signal Processing: Image Communication from 2015 to 2017. Since 2015, he has also been a member of the IEEE Computational Imaging Technical Committee.