
Rene Vidal

Mathematics of Deep Learning

Pricing

SPS Members $0.00
IEEE Members $11.00
Non-members $15.00

Thanks to the introduction of deep networks for representation learning, the past few years have seen a dramatic increase in the performance of recognition systems. However, the mathematical reasons for this success remain elusive. For example, a key issue is that the neural network training problem is nonconvex, hence optimization algorithms may not return a global minimum. In addition, the regularization properties of algorithms such as dropout remain poorly understood. The first part of this talk will overview recent work on the theory of deep learning that aims to understand 1) how to design the network architecture, 2) how to regularize the network weights, and 3) how to guarantee global optimality. The second part of this talk will present sufficient conditions to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minimum from any initialization. Such conditions apply to problems in matrix factorization, tensor factorization, and deep learning. The third part of this talk will present an analysis of the optimization and regularization properties of dropout in the case of matrix factorization. Examples from neuroscience and computer vision will also be presented.
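
To give a concrete picture of the setting analyzed in the third part, the sketch below shows one common way dropout can be applied to matrix factorization: approximating X by U V^T while randomly dropping rank-one components u_i v_i^T at each gradient step. This is an illustrative assumption, not material from the talk; all variable names and hyperparameters are made up for the example.

    # Minimal NumPy sketch (illustrative only): dropout applied to matrix
    # factorization X ~ U @ V.T, dropping each rank-one term u_i v_i^T with
    # probability 1 - p at every gradient step. Names and values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r, p, lr = 30, 20, 5, 0.7, 0.01

    X = rng.standard_normal((m, n))        # data matrix to factorize
    U = 0.1 * rng.standard_normal((m, r))  # left factor
    V = 0.1 * rng.standard_normal((n, r))  # right factor

    for step in range(2000):
        mask = rng.binomial(1, p, size=r) / p  # Bernoulli mask over the r components, rescaled by 1/p
        E = (U * mask) @ V.T - X               # residual of the dropped-out model
        U -= lr * (E @ V) * mask               # gradient only reaches the kept components
        V -= lr * E.T @ (U * mask)

    print("relative fit error:", np.linalg.norm(U @ V.T - X) / np.linalg.norm(X))

Roughly speaking, averaging the dropped-out objective over the random mask yields the plain factorization loss plus a penalty on the products of the component norms, which is the sense in which dropout acts as a regularizer in this setting.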
Duration
0:56:16

IEEE SPS Education Center FAQs

The IEEE SPS Education Center is your hub for educational resources in signal processing. It offers a variety of materials tailored for students and professionals alike. You can explore content based on your specific interests and skill levels.

To access a program, select it and follow the external link to the IEEE SPS Resource Center.

Educational credits in the form of professional development hours (PDHs) or continuing education units (CEUs) are available on select educational programs.