April 10, 2026 11:00 am - 12:00 pm ET
Seminars, Mathematics of Machine Learning
LGRT 1685

Abstract

Diffusion models provide powerful priors for Bayesian inverse problems, but many posterior samplers rely on heuristic reverse-time modifications and can struggle with nonlinear operators or multimodal posteriors. In this talk, we study a stabilized path-space approach that formulates posterior sampling as a controlled diffusion, connecting the problem to stochastic control and Schrödinger bridges. We introduce a simple time reparameterization to resolve a key well-posedness obstacle and develop an iterative trust-region method for stable, robust sampling. On a benchmark suite with reliable reference posteriors, the approach achieves state-of-the-art results across inpainting, X-ray tomography, and phase retrieval, while also enabling principled importance-sampling corrections for asymptotically exact posterior expectations.
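To fix intuition for "posterior sampling as a controlled diffusion", here is a minimal toy sketch (not the speaker's method): a one-dimensional Gaussian inverse problem sampled by a Langevin diffusion driven by the posterior score, i.e. the prior score plus a likelihood-gradient control term. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inverse problem: prior x ~ N(0, 1),
# observation y = x + noise with noise ~ N(0, sigma_y^2).
sigma_y = 0.5
y = 0.8

def posterior_score(x):
    # grad log p(x | y) = grad log prior + grad log likelihood:
    # the likelihood term acts as the "control" steering the diffusion.
    return -x + (y - x) / sigma_y**2

# Unadjusted Langevin dynamics: a simple diffusion whose drift is the
# posterior score, run in parallel over many chains.
n_chains, n_steps, eps = 4000, 800, 1e-2
x = rng.standard_normal(n_chains)  # initialize from the prior
for _ in range(n_steps):
    x = x + eps * posterior_score(x) + np.sqrt(2 * eps) * rng.standard_normal(n_chains)

# The Gaussian posterior is available in closed form for comparison.
post_mean = y / (1 + sigma_y**2)
post_var = sigma_y**2 / (1 + sigma_y**2)
print(x.mean(), post_mean)
```

The samples concentrate around the analytic posterior mean; the stabilized path-space approach in the talk replaces this exact score with a learned diffusion prior and a principled control, which is where the well-posedness and trust-region issues arise.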