Markos Katsoulakis: What is the Math Behind Generative AI?
Abstract
Over the last decade, generative models and generative artificial intelligence (GenAI) have achieved transformative results, from realistic image generation and text synthesis to advancements in scientific fields such as aerospace, astronomy, biology, and medicine. Mathematics is at the core of these breakthroughs, providing the theoretical foundation needed to understand, critique, and advance generative methods.
This talk will cover the foundational mathematical concepts that drive generative AI, including applied probability, statistical inference, optimal transport, and optimization. We will highlight the role of dynamical systems, stochastic differential equations, and control theory in generative models such as Normalizing Flows, Generative Adversarial Networks (GANs), Diffusion Models, and Deep Autoregressive Models. Additionally, we will discuss principles from pure mathematics—such as equivariance and manifolds—that are increasingly important for designing models that respect symmetries and geometric constraints, improving data efficiency and interpretability.
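As a concrete illustration of the stochastic-differential-equation viewpoint, consider the standard formulation from the score-based diffusion literature (a well-known example, not a result specific to this talk): a diffusion model perturbs data with a forward SDE and generates samples by simulating its time reversal.

```latex
% Forward (noising) SDE: data x_0 ~ p_data is gradually driven toward noise
dx_t = f(x_t, t)\,dt + g(t)\,dW_t

% Reverse (generative) SDE, run backward in time, where \bar{W}_t is a
% reverse-time Brownian motion and the score \nabla_x \log p_t(x) is
% approximated by a trained neural network:
dx_t = \bigl[ f(x_t, t) - g(t)^2 \nabla_x \log p_t(x_t) \bigr]\,dt + g(t)\,d\bar{W}_t
```

Once the score function is learned, generation amounts to numerically integrating the reverse SDE, which is where the tools from dynamical systems, stochastic analysis, and control theory mentioned above come into play.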
Understanding the mathematics behind GenAI enables us to clarify model behaviors, assess limitations, quantify trustworthiness, and drive the development of more robust and reliable approaches. At the same time, generative AI introduces new challenges in mathematics, modeling, and scientific computing, forging new connections across the mathematical sciences.