
Self-Supervised Learning: The Future of Signal Understanding

Pricing: SPS Members $0.00, IEEE Members $11.00, Non-members $15.00

Plenary talk delivered at ICIP 2019 in Taipei.
Duration: 1:17:32

Mathematics of Deep Learning

Pricing: SPS Members $0.00, IEEE Members $11.00, Non-members $15.00

The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. For example, a key issue is that the neural network training problem is non-convex, hence optimization algorithms may not return a global minimum. In addition, the regularization properties of algorithms such as dropout remain poorly understood. The first part of this talk will overview recent work on the theory of deep learning that aims to understand how to design the network architecture, how to regularize the network weights, and how to guarantee global optimality. The second part will present sufficient conditions that guarantee that local minima are globally optimal and that a local descent strategy can reach a global minimum from any initialization; such conditions apply to problems in matrix factorization, tensor factorization, and deep learning. The third part will present an analysis of the optimization and regularization properties of dropout in the case of matrix factorization. Examples from neuroscience and computer vision will also be presented.
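
To make the non-convexity issue concrete, here is a minimal NumPy sketch (the matrix sizes, rank, step size, and keep probability are illustrative assumptions, not taken from the talk): plain gradient descent on the non-convex matrix factorization objective ||X - UV^T||_F^2, with dropout applied to the rank-1 terms of the factorization in the spirit of the talk's third part.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank data (sizes are assumptions, not from the talk).
m, n, r = 30, 20, 3
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# The objective ||X - U V^T||_F^2 is non-convex in (U, V); this is the
# kind of training problem whose global optimality the talk studies.
d = 5                                    # factorization rank (>= true rank)
U = 0.1 * rng.standard_normal((m, d))
V = 0.1 * rng.standard_normal((n, d))

lr, p_keep = 0.01, 0.8                   # step size; dropout keep probability
for step in range(5000):
    # Dropout on the rank-1 terms: each column pair (U[:, i], V[:, i]) is
    # kept with probability p_keep, acting as a stochastic regularizer.
    mask = (rng.random(d) < p_keep) / p_keep
    R = X - (U * mask) @ V.T             # residual under the dropped factors
    gU = R @ (V * mask)                  # negative gradient w.r.t. U
    gV = R.T @ (U * mask)                # negative gradient w.r.t. V
    U += lr * gU
    V += lr * gV

# Despite non-convexity, local descent typically fits X closely here;
# dropout leaves a small regularization bias.
print("relative error:", np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))
```

Sufficient conditions like those discussed in the second part of the talk characterize when such a local descent strategy provably reaches a global minimum.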
Duration: 0:56:16

Sparse Modeling of Data and Its Relation to Deep Learning

Pricing: SPS Members $0.00, IEEE Members $11.00, Non-members $15.00

Sparse approximation is a well-established theory with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then describe two special cases of it: convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to ... deep-learning architectures. Alongside this main message of bringing a theoretical backbone to deep learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms' performance. This talk is meant for newcomers to this field; no prior knowledge of sparse approximation is assumed.
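
For the newcomers the talk targets, the following minimal NumPy sketch may help make the basic sparse coding problem concrete (the dictionary size, sparsity level, and penalty weight are arbitrary assumptions): ISTA solves min_a 0.5*||x - Da||_2^2 + lam*||a||_1, and unrolling its iterations, each a linear map followed by a pointwise soft threshold, is one well-known bridge between sparse models and layered network architectures, in the spirit of the ML-CSC message.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, x, lam, n_iter=100):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 by ISTA.

    Each iteration is a linear map followed by a pointwise nonlinearity;
    unrolled, the iterations resemble the layers of a feed-forward network.
    """
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a

# Toy example: a signal synthesized from 3 atoms of a random dictionary,
# then recovered by sparse coding.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
a_true = np.zeros(128)
a_true[rng.choice(128, 3, replace=False)] = rng.standard_normal(3)
x = D @ a_true
a_hat = ista(D, x, lam=0.05)
print("nonzero coefficients recovered:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```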
Duration: 0:54:45
