Sep 20, 10:00 am - 11:00 am ET
Mathematics of Machine Learning
Training on the Edge of Stability is Caused by Layerwise Jacobian Alignment

During neural network training, the sharpness of the Hessian matrix of the training loss rises until training is on the edge of stability. As a result, even non-stochastic gradient descent does not accurately model the underlying dynamical system defined by the gradient flow of the training loss. We treat neural network training as a system of stiff ordinary differential equations and use an exponential Euler solver to train the network without entering the edge of stability. We demonstrate experimentally that the increase in the sharpness of the Hessian matrix is caused by the layerwise Jacobian matrices of the network becoming aligned, so that a small change in the preactivations at the front of the network can cause a large change in the outputs at the back of the network. We further demonstrate that the degree of layerwise Jacobian alignment scales as a power law in the size of the dataset, with a coefficient of determination between 0.74 and 0.97.
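
A minimal sketch of the stiff-ODE idea in the abstract, not the speakers' method: on a toy quadratic loss with an illustrative "sharp" Hessian H, an exponential Euler step for the gradient-flow ODE stays stable at a step size where explicit gradient descent crosses the edge of stability. The matrix H, its eigenvalues, and the step size h below are assumptions made purely for illustration.

import numpy as np
from scipy.linalg import expm

# Toy quadratic loss L(theta) = 0.5 * theta @ H @ theta with a stiff Hessian
# (widely spread eigenvalues), standing in for a sharp training-loss Hessian.
rng = np.random.default_rng(0)
eigs = np.array([0.01, 1.0, 100.0])           # illustrative spectrum, condition number 1e4
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
H = Q @ np.diag(eigs) @ Q.T

def grad(theta):
    return H @ theta                           # gradient of the quadratic loss

def gd_step(theta, h):
    # Explicit (forward) Euler on the gradient flow; unstable once h * lambda_max > 2.
    return theta - h * grad(theta)

def exp_euler_step(theta, h):
    # Exponential Euler: theta_{k+1} = theta_k - h * phi_1(-h H) @ grad(theta_k),
    # where phi_1(A) = A^{-1}(e^A - I); on this linear model it is exact and
    # stable for any step size.
    phi1 = np.linalg.solve(-h * H, expm(-h * H) - np.eye(3))
    return theta - h * phi1 @ grad(theta)

theta_gd = theta_ee = rng.standard_normal(3)
h = 0.05                                       # h * lambda_max = 5 > 2: plain GD diverges
for _ in range(50):
    theta_gd = gd_step(theta_gd, h)
    theta_ee = exp_euler_step(theta_ee, h)

print("gradient descent   |theta| =", np.linalg.norm(theta_gd))   # blows up
print("exponential Euler  |theta| =", np.linalg.norm(theta_ee))   # decays toward 0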

Sep 26, 4:00 pm ET
Reading Seminar on Automorphism Groups of Manifolds
Dehn twists

We will define Dehn twists in all dimensions, discuss their properties, and explain how to calculate their orders in topological and smooth mapping class groups.
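
For orientation, here is one common formulation of the Dehn twist in higher dimensions; the talk may use a different but equivalent construction. Given an embedded $S^{n-1} \times [0,1]$ in a smooth $n$-manifold $M$ and a loop $\gamma \colon [0,1] \to SO(n)$ based at the identity, the twist is

\[
  t_\gamma \colon S^{n-1} \times [0,1] \longrightarrow S^{n-1} \times [0,1],
  \qquad
  t_\gamma(x, s) = (\gamma(s) \cdot x,\ s),
\]

extended by the identity off the embedded $S^{n-1} \times [0,1]$ to give a diffeomorphism of $M$. For $n = 2$ and $\gamma$ a generator of $\pi_1(SO(2)) \cong \mathbb{Z}$, this recovers the classical Dehn twist about an embedded circle; up to isotopy, $t_\gamma$ depends only on the based homotopy class of $\gamma$ in $SO(n)$.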
