IEEE Signal Processing Magazine
Encoding-decoding convolutional neural networks (CNNs) play a central role in data-driven noise reduction and appear within numerous deep learning algorithms. However, these CNN architectures are often developed in an ad hoc fashion, and theoretical underpinnings for important design choices are generally lacking. Several existing works have sought to explain the internal operation of these CNNs, but the resulting ideas are scattered and often require significant expertise to be accessible to a broader audience. To open up this exciting field, this article builds intuition on the theory of deep convolutional framelets (TDCFs) and explains diverse encoding-decoding (ED) CNN architectures within a unified theoretical framework. By connecting basic principles from signal processing to the field of deep learning, this self-contained material offers significant guidance for designing robust and efficient novel CNN architectures.
A well-known image processing application is noise/artifact reduction, which consists of estimating a noise/artifact-free signal from a noisy observation. To achieve this, conventional signal processing algorithms often employ explicit assumptions on the signal and noise characteristics, which has resulted in well-known algorithms such as wavelet shrinkage, sparse dictionaries, total-variation minimization, and low-rank approximation. With the advent of deep learning techniques, conventional signal processing algorithms for image denoising have been regularly outperformed and increasingly replaced by encoding-decoding CNNs.
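To make the classical baseline concrete, the following is a minimal sketch of wavelet shrinkage on a 1-D signal: a one-level orthonormal Haar transform, soft-thresholding of the detail (high-pass) coefficients, and an inverse transform. The signal, noise level, and threshold are illustrative choices, not values from the article.

```python
import numpy as np

def haar_1d(x):
    # One level of the orthonormal Haar transform (x must have even length).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    return a, d

def ihaar_1d(a, d):
    # Inverse of haar_1d: perfect reconstruction when nothing is thresholded.
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # Shrink coefficients toward zero; small (noise-dominated) ones vanish.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.5, -0.5], 16)          # piecewise-constant test signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)

a, d = haar_1d(noisy)
denoised = ihaar_1d(a, soft_threshold(d, 0.2))         # shrink only the details

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```

The explicit signal model here is sparsity of the detail coefficients: for a piecewise-constant signal, the clean details are (almost) all zero, so thresholding removes mostly noise and the reconstruction error drops.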
In this article, rather than conventional signal processing algorithms, we focus on so-called encoding-decoding CNNs. These models contain an encoder, which maps the input to multichannel/redundant representations, and a decoder, which maps the encoded signal back to the original domain. In both the encoder and the decoder, sparsifying nonlinearities, which suppress parts of the signal, are applied. In contrast to conventional signal processing algorithms, encoding-decoding CNNs are often presented as a solution that does not make explicit assumptions on the signal and noise. For example, in supervised algorithms, an encoding-decoding CNN learns the optimal parameters to filter the signal from a set of paired examples of noise/artifact-free images and images contaminated with noise/artifacts, which greatly simplifies the noise-reduction problem because it circumvents explicit modeling of the signal and noise. Furthermore, the good performance and simple use of encoding-decoding CNNs have enabled additional data-driven noise-reduction algorithms, where CNNs are embedded as part of a larger system. Examples of such approaches are unsupervised noise reduction and denoising based on generative adversarial networks. Besides this, smoothness in signals can also be obtained by advanced regularization using CNNs, e.g., by exploiting data-driven model-based iterative reconstruction.
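The encoder/decoder structure described above can be sketched in a few lines of numpy: the encoder convolves a 1-D signal with a bank of kernels to produce a multichannel (redundant) representation, a ReLU acts as the sparsifying nonlinearity, and the decoder filters each channel again and sums the results back to the original domain. The filter count, kernel length, and random weights are illustrative assumptions standing in for learned parameters.

```python
import numpy as np

def encode(x, filters):
    # Encoder: map a 1-D signal to one output channel per kernel,
    # keeping the original length ('same' zero-padded convolution).
    return np.stack([np.convolve(x, f, mode="same") for f in filters])

def relu(z):
    # Sparsifying nonlinearity: suppresses (zeroes) part of the signal.
    return np.maximum(z, 0.0)

def decode(channels, filters):
    # Decoder: filter each channel with the flipped kernel and sum,
    # mapping the redundant representation back to the signal domain.
    return sum(np.convolve(c, f[::-1], mode="same")
               for c, f in zip(channels, filters))

rng = np.random.default_rng(1)
x = rng.standard_normal(64)                 # input signal
filters = rng.standard_normal((8, 5))       # 8 random kernels of length 5

code = relu(encode(x, filters))             # redundant, sparse: shape (8, 64)
y = decode(code, filters)                   # back in the original domain: shape (64,)
print(code.shape, y.shape)
```

In a trained network the kernels are learned from paired examples, and practical architectures stack several such encoder and decoder stages; this sketch only shows the domain mapping, single stage and single nonlinearity, that the article analyzes.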