## Regularized Mean Field Optimization with Application to Neural Networks

Our recent work on regularized mean field optimization aims to provide a theoretical foundation for analyzing the efficiency of neural network training, as well as to inspire new training algorithms. In this talk we shall see how different regularizers, such as relative entropy and Fisher information, lead to different gradient flows on the space of probability measures. Beyond these gradient flows, we also propose and study alternative algorithms, such as the entropic fictitious play, to search for the optimal weights of neural networks. Each of these algorithms is guaranteed to converge exponentially, and we shall highlight their performance in some simple numerical tests.
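To give a concrete feel for the entropy-regularized setting, here is a minimal sketch (not the speaker's implementation; all names, constants, and the toy data are illustrative) of how a relative-entropy regularizer turns the gradient flow on probability measures into noisy gradient descent on the particles, i.e. the neurons of a two-layer network:

```python
import numpy as np

# Minimal sketch, under illustrative assumptions: entropy-regularized
# mean-field optimization of a two-layer network
#     f(x) = (1/m) * sum_j a_j * tanh(w_j * x).
# The relative-entropy regularizer corresponds, at the particle level, to
# adding Gaussian noise to plain gradient descent (mean-field Langevin).

rng = np.random.default_rng(0)

# Toy 1-D regression data
X = np.linspace(-2.0, 2.0, 64)
y = np.sin(X)
n = len(X)

m = 200                            # number of particles (neurons)
params = rng.normal(size=(m, 2))   # each row holds one neuron's (a_j, w_j)

def predict(params):
    a, w = params[:, 0], params[:, 1]
    return np.tanh(np.outer(X, w)) @ a / m

def loss(params):
    return 0.5 * np.mean((predict(params) - y) ** 2)

lr, sigma = 0.2, 1e-3              # step size and entropic noise level
loss0 = loss(params)
for _ in range(1000):
    a, w = params[:, 0], params[:, 1]
    H = np.tanh(np.outer(X, w))            # (n, m) hidden activations
    r = predict(params) - y                # residuals
    grad_a = H.T @ r / (n * m)
    grad_w = a * ((1.0 - H ** 2).T @ (r * X)) / (n * m)
    grad = np.stack([grad_a, grad_w], axis=1)
    # Mean-field scaling (factor m) plus Gaussian noise from the entropy term
    params = params - lr * m * grad \
        + np.sqrt(2.0 * lr * sigma) * rng.normal(size=params.shape)

print(f"loss: {loss0:.4f} -> {loss(params):.4f}")
```

Without the noise term this is ordinary gradient descent on the empirical risk; the noise level `sigma` plays the role of the temperature multiplying the relative entropy in the regularized objective.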

## Department of Mathematics and Statistics