Submission Deadline: February 19, 2021
Call for Proposals Document
Position description: The research project will focus on developing machine learning and deep learning methods for fundamental computer vision problems, including object motion tracking, segmentation, 3D reconstruction, classification, and image captioning. The work spans 2D/3D data such as RGB-D images, remote sensing data, 3D CT/MRI medical images, and biomedical text.
October 26-27, 2020
Application submission deadline: October 14, 2020
Location: Virtual conference
Submission Deadline: October 12, 2020
Call for Proposals Document
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning (ML) applications. It solves optimization problems much as gradient-based methods do, but it requires only function evaluations rather than gradients. Specifically, ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update. In this article, we provide a comprehensive review of ZO optimization, with an emphasis on showing the underlying intuition, optimization principles, and recent advances in convergence analysis.
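To make those three steps concrete, here is a minimal sketch of one standard ZO scheme: a multi-point random gradient estimator followed by a plain descent update. The test function, smoothing radius, and all parameter values below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-3, num_dirs=10):
    """Zeroth-order gradient estimate: average the finite difference
    (f(x + mu*u) - f(x)) / mu over random Gaussian directions u.
    Only function evaluations of f are used; no analytic gradient."""
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = np.random.randn(d)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

def zo_descent(f, x0, lr=0.1, steps=200):
    """ZO optimization loop: estimate the gradient, use its negative
    as the descent direction, and update the solution."""
    x = x0.copy()
    for _ in range(steps):
        g = zo_gradient_estimate(f, x)
        x -= lr * g          # solution update along the descent direction
    return x

# Usage: minimize a simple quadratic using function values only.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = zo_descent(f, np.zeros(5))
print(np.round(x_star, 2))   # close to [1. 1. 1. 1. 1.]
```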
Optimization lies at the heart of machine learning (ML) and signal processing (SP). Contemporary approaches based on the stochastic gradient (SG) method are nonadaptive in the sense that their implementations employ prescribed parameter values that must be tuned for each application. This article summarizes recent research and motivates future work on adaptive stochastic optimization methods, which have the potential to offer significant computational savings when training large-scale systems.
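As one concrete point of contrast, the sketch below compares a fixed, hand-tuned SG step with an AdaGrad-style update, in which the effective per-coordinate step size adapts to the observed gradient history. The objective, learning rates, and noisy gradient oracle are illustrative assumptions, not the article's method.

```python
import numpy as np

def sgd_step(x, g, lr=0.1):
    """Nonadaptive SG update: lr is a prescribed constant that
    must be tuned by hand for each application."""
    return x - lr * g

def adagrad_step(x, g, state, lr=0.1, eps=1e-8):
    """AdaGrad-style adaptive update: each coordinate's step size
    shrinks with its accumulated squared-gradient history, so a
    single global step size need not be tuned as carefully."""
    state = state + g ** 2                    # squared-gradient history
    return x - lr * g / (np.sqrt(state) + eps), state

# Usage: noisy gradient oracle for f(x) = ||x||^2 / 2 (illustrative).
rng = np.random.default_rng(0)
x, state = np.ones(3), np.zeros(3)
for _ in range(100):
    g = x + 0.1 * rng.standard_normal(3)      # stochastic gradient of f
    x, state = adagrad_step(x, g, state)
```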
Many contemporary applications in signal processing and machine learning give rise to structured nonconvex nonsmooth optimization problems that can often be tackled quite effectively by simple iterative methods. One of the keys to understanding such a phenomenon (and, in fact, a very difficult conundrum even for experts) lies in the study of "stationary points" of the problem in question. Unlike smooth optimization, for which the definition of a stationary point is rather standard, there are myriad definitions of stationarity in nonsmooth optimization.
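As a brief illustration of why the nonsmooth case admits many definitions (the example function here is our own, not from the article): in smooth optimization a stationary point is simply one where the gradient vanishes, whereas one common nonsmooth generalization, Clarke stationarity, asks only that zero belong to a set of generalized gradients.

```latex
\[
\underbrace{\nabla f(x^\ast) = 0}_{\text{smooth stationarity}}
\qquad \text{vs.} \qquad
\underbrace{0 \in \partial f(x^\ast)}_{\text{Clarke stationarity}}
\]
% Example: f(x) = |x| is nondifferentiable at x = 0, yet the
% subdifferential \partial f(0) = [-1, 1] contains 0, so the
% origin is Clarke-stationary (here, in fact, the minimizer).
```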