Barron spaces
Barron spaces are natural function spaces for the approximation theory of shallow neural networks in high dimensions, and they therefore play an important role in the modern analysis of machine learning. I will introduce preliminary definitions for neural networks and motivate the study of Barron spaces before giving the formal definition. I will then discuss their strengths and limitations, a few brief applications, and possible open questions. If time permits, I will also introduce analogues of Barron spaces for deep networks, the multi-layer function spaces, whose general theory is still under development.
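For orientation, here is a rough sketch of the objects involved, following the definitions in the E-Ma-Wu paper listed below; the talk will give the precise statements and hypotheses.

% A shallow (two-layer) ReLU network of width m:
\[
  f_m(x) \;=\; \frac{1}{m} \sum_{k=1}^{m} a_k \,\sigma(b_k \cdot x + c_k),
  \qquad \sigma(t) = \max(t, 0).
\]
% Barron functions are those admitting an infinite-width (integral) representation
\[
  f(x) \;=\; \int a \,\sigma(b \cdot x + c)\, \rho(\mathrm{d}a,\mathrm{d}b,\mathrm{d}c),
\]
% over probability measures \rho, with the Barron norm
\[
  \|f\|_{\mathcal{B}} \;=\; \inf_{\rho} \; \mathbb{E}_{\rho}\!\left[\, |a| \,\big(\|b\|_{1} + |c|\big) \right],
\]
% where the infimum runs over all measures \rho representing f as above.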
Some references for the talk are:
Weinan E, Chao Ma, and Lei Wu, The Barron Space and the Flow-Induced Function Spaces for Neural Network Models, https://arxiv.org/abs/1906.08039 (this is the main reference for the talk).
Weinan E and Stephan Wojtowytsch, On the Banach spaces associated with multi-layer ReLU networks: function representation, approximation, and gradient descent dynamics, https://arxiv.org/abs/2007.15623 (reference for the multi-layer spaces).
Weinan E and Stephan Wojtowytsch, Some observations on high-dimensional partial differential equations with Barron data, https://arxiv.org/abs/2012.01484 (for applications to PDE learning; I will only discuss maybe one example from here).
Andrew Barron, Universal Approximation Bounds for Superpositions of a Sigmoidal Function, http://www.stat.yale.edu/~arb4/publications_files/UniversalApproximation... (this is Barron's original paper; I am not going to follow it for the talk, but people might be interested in it).