IEEE JSTSP Special Issue on Seeking Low-dimensionality in Deep Neural Networks (SLowDNN)

Manuscript Due: 30 November 2023
Publication Date: October 2024

The resurgence of deep neural networks has led to revolutionary success across almost all areas of engineering and science. Despite recent endeavors, however, our theoretical understanding of deep networks pertains only to idealized and over-simplified network models. The underlying principles behind the success of deep learning remain largely a mystery, which hinders its further development and its adoption in broader applications. On the other hand, it has long been believed that the role of deep networks is to learn certain (nonlinear) low-dimensional representations of the data. This is further evidenced by recently emerged strong connections between deep neural networks and low-dimensional models at multiple levels: approximation ability, generalization, representations, network architectures, optimization strategies, deep implicit (denoiser) priors, and their application to various signal and image processing problems.
 
The recent surge in relevant research and the emergence of many exciting opportunities motivate this timely special issue. Recall that, to a large extent, the geometric and statistical properties of representative low-dimensional models (such as sparse and low-rank models, along with their variants and extensions) are now well understood. We expect that further exploiting low-dimensional models could have significant impacts on both the practice and the mathematical foundations of deep learning, ranging from the design of network architectures, to optimization strategies, to a theoretical understanding of the performance of deep neural networks in terms of approximation, representation, generalization, and interpretability for tasks involving high-dimensional and large-scale modern datasets. We therefore seek to bring together researchers from academia and industry to introduce the signal processing and machine learning community to the latest advances in deep learning and to point readers to many promising research opportunities.
 
Topics of interest include, but are not limited to, connections between low-dimensional models and the theory, architectures, algorithms, and applications of deep neural networks:
  • Theory: approximation, generalization, robustness, representations, interpretability
  • Optimization: benign non-convex optimization, implicit bias analysis, convergence guarantees
  • Architectures: compact/model-based/neuro-inspired/invariant neural networks
  • Algorithms: pruning, sparse training, robust training, isometry learning
  • Applications: deep prior/generative models for signals/images, applications for inverse problems
We also welcome creative papers outside the areas listed here but within the overall scope of the special issue. Prospective authors may contact the Guest Editors to ascertain interest in topics that are not listed, and should visit the special issue site for information on paper submission. Manuscripts should be submitted through the Manuscript Central system and will be peer-reviewed according to the standard IEEE process.

Important Dates

  • Submission: 30 November 2023
  • First review completed: 31 January 2024
  • Revised manuscript: 31 March 2024
  • Final decision: 31 May 2024
  • Final manuscript: 30 June 2024
  • Publication: October 2024

Guest Editors