Jiequn Han: Deep Picard Iteration: Solving High-Dimensional PDEs via Regression-Based Learning
Abstract: Deep learning has opened new possibilities for solving high-dimensional partial differential equations (PDEs), a longstanding challenge due to the notorious "curse of dimensionality." Early methods, such as the Deep BSDE (backward stochastic differential equation) method, demonstrated how stochastic optimization ideas from deep learning could be combined with stochastic representations of PDEs to address this difficulty.
Extending these ideas, this talk will introduce the recently developed Deep Picard Iteration (DPI) method (arXiv:2409.08526), which reformulates the typically complex training objectives of neural network-based PDE solvers into standard regression tasks involving function values and gradients. To fully leverage this design, DPI incorporates a control variate, with theoretical guarantees, to address the infinite variance problem in the gradient estimators. Numerical experiments on various problems show that DPI consistently outperforms existing state-of-the-art methods, with greater robustness to hyperparameters, particularly in challenging regimes with long time horizons and strong nonlinearity.
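To make the "regression-based" idea concrete, here is a minimal toy sketch (not the paper's algorithm): the core move of such methods is to fit a function approximator to Monte Carlo targets given by a Feynman-Kac representation, which DPI repeats across Picard iterations and augments with gradient targets and a control variate. The example below performs that single regression step for the 1-D heat equation, using polynomial features in place of a neural network; all names and parameter choices are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only. For the heat equation
#     u_t + 0.5 * u_xx = 0,   u(T, x) = x^2,
# Feynman-Kac gives u(0, x) = E[(x + W_T)^2] = x^2 + T.
# We regress Monte Carlo samples of the terminal payoff onto features of x.

rng = np.random.default_rng(0)
T = 1.0            # time horizon (illustrative choice)
n = 200_000        # number of Monte Carlo samples (illustrative choice)

x = rng.uniform(-1.0, 1.0, size=n)         # spatial sample points
W = rng.normal(0.0, np.sqrt(T), size=n)    # Brownian increment over [0, T]
y = (x + W) ** 2                           # noisy target g(X_T)

# Least-squares regression on polynomial features [1, x, x^2]; DPI would
# instead train a neural network, regress gradient targets as well, and use
# a control variate to tame the variance of those gradient estimators.
A = np.stack([np.ones_like(x), x, x * x], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # ≈ [T, 0, 1], recovering u(0, x) ≈ x^2 + T
```

For a semilinear PDE, this regression would be repeated: each Picard iterate supplies the nonlinear source term inside the Monte Carlo targets for the next fit.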
Bio: Jiequn Han is a Research Scientist at the Center for Computational Mathematics, Flatiron Institute, Simons Foundation. He conducts research on machine learning for science, drawing broadly from the methodologies and challenges of various scientific disciplines, with a focus on solving high-dimensional problems in scientific computing, primarily those related to PDEs. He holds a Ph.D. in Applied Mathematics from Princeton University and dual bachelor's degrees in Computational Mathematics and Economics from Peking University. His research has been recognized with the SIAM Computational Science and Engineering (CSE) Early Career Prize.