Mathematics of Transformers
Friday 26 September 2025
Reception & Coffee
08:30 - 09:00
Room: Building 1b, Seminar Room 4ab
Welcome & Introduction
09:00 - 09:15
Room: Building 1b, Seminar Room 4ab
Dynamic metastability in self-attention dynamics
Borjan Geshkovski
09:15 - 10:00
Room: Building 1b, Seminar Room 4ab
A multiscale analysis of mean-field transformers in the moderate interaction regime
Giuseppe Bruno
10:00 - 10:45
Room: Building 1b, Seminar Room 4ab
In this talk, we study the evolution of tokens across the depth of encoder-only transformer models at inference time, modeling them as a system of interacting particles in the infinite-depth limit. Motivated by techniques for extending the context length of large language models, we focus on the moderate interaction regime, where the number of tokens is large and the inverse temperature parameter scales accordingly. In this setting, the dynamics exhibit a multiscale structure. Using PDE analysis, we identify different phases depending on the choice of parameters.
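For orientation, the token dynamics studied in this line of work are commonly written as an interacting particle system of the following schematic form (the notation here is illustrative, not taken verbatim from the talk):
\[
\dot{x}_i(t) \;=\; \sum_{j=1}^{n} \frac{e^{\beta \langle Q x_i(t),\, K x_j(t) \rangle}}{\sum_{k=1}^{n} e^{\beta \langle Q x_i(t),\, K x_k(t) \rangle}}\, V x_j(t), \qquad i = 1, \dots, n,
\]
where the n tokens x_i play the role of particles, depth plays the role of time, and the moderate interaction regime corresponds to letting the inverse temperature \beta scale with n at a prescribed rate.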
Coffee Break
10:45 - 11:15
Room: Building 1b, Seminar Room 4ab
Mean-Field Transformer Dynamics with Gaussian Inputs
Valérie Castin
11:15 - 12:00
Room: Building 1b, Seminar Room 4ab
Transformers, which underlie the recent successes of large language models, represent data as sequences of vectors called tokens. This representation is leveraged by the attention function, which learns dependencies between tokens and is key to the success of Transformers. However, the dynamics induced by the iterative application of attention across layers remain to be fully understood. To analyze these dynamics, we identify each input sequence with a probability measure, thus handling input sequences of arbitrary length, and model its evolution as a Vlasov equation called the Transformer PDE, whose velocity field is non-linear in the probability measure. For compactly supported initial data and several self-attention variants, we show that the Transformer PDE is well-posed and is the mean-field limit of an interacting particle system. We also study the case of Gaussian initial data, which has the convenient property of remaining Gaussian throughout the dynamics. This allows us to identify typical behaviors theoretically and numerically, and to highlight a clustering phenomenon that parallels previous results in the discrete case.
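Schematically, and with illustrative notation rather than the talk's exact formulation, identifying an input sequence with a probability measure \mu_t leads to a continuity equation of Vlasov type,
\[
\partial_t \mu_t + \nabla \cdot \big( \Gamma[\mu_t]\, \mu_t \big) = 0,
\qquad
\Gamma[\mu](x) = \int \frac{e^{\langle Q x,\, K y \rangle}}{\int e^{\langle Q x,\, K z \rangle}\, \mathrm{d}\mu(z)}\, V y \, \mathrm{d}\mu(y),
\]
where the velocity field \Gamma[\mu] is an attention-type integral that is non-linear in the measure \mu.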
Discussion
12:00 - 12:30
Room: Building 1b, Seminar Room 4ab
Lunch Break
12:30 - 14:00
Room: Canteen
Transformers: From Dynamical Systems to Autoregressive In-Context Learners
Michaël Sander
14:00 - 14:45
Room: Building 1b, Seminar Room 4ab
Transformers have enabled machine learning to reach capabilities that were unimaginable just a few years ago. Despite these advances, a deeper understanding of the key mechanisms behind their success is needed to build the next generation of AI systems. In this talk, we will begin by presenting a dynamical-systems perspective on Transformers, demonstrating that they can be interpreted as interacting particle flow maps on the space of probability measures, solving an optimization problem over a context-dependent inner objective. We will also discuss the impact of attention map normalization on Transformer behavior in this framework. We will then focus on the causal setting and propose a model to understand the mechanism behind next-token prediction in a simple autoregressive in-context learning task. We will explicitly construct a Transformer that learns to solve this task in-context through a causal kernel descent method, with connections to the Kaczmarz algorithm in Hilbert spaces, and discuss its links to inference-time scaling.
References:
Sander, M. E., & Peyré, G. (2025). Towards understanding the universality of transformers for next-token prediction. International Conference on Learning Representations (ICLR).
Sander, M. E., Giryes, R., Suzuki, T., Blondel, M., & Peyré, G. (2024). How do transformers perform in-context autoregressive learning? International Conference on Machine Learning (ICML).
Sander, M. E., Ablin, P., Blondel, M., & Peyré, G. (2022). Sinkformers: Transformers with doubly stochastic attention. International Conference on Artificial Intelligence and Statistics (AISTATS).
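As a rough illustration of the autoregressive in-context setting studied in the references above (the notation is ours, not the speaker's): each context is a sequence generated by a sequence-specific matrix W,
\[
s_{t+1} = W s_t, \qquad t = 1, \dots, T-1,
\]
and the task is to predict s_{T+1} for an unseen W. One natural strategy is to estimate W from the observed pairs, for instance via
\[
\widehat{W} \in \arg\min_{W'} \sum_{t=1}^{T-1} \| s_{t+1} - W' s_t \|^2, \qquad \hat{s}_{T+1} = \widehat{W} s_T,
\]
a least-squares viewpoint offered here only as intuition for the causal kernel descent mentioned in the abstract.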
Transformers as token-to-token function learners
Subhabrata Dutta
14:45 - 15:30
Room: Building 1b, Seminar Room 4ab
The ambitious question of understanding a Transformer can be decomposed into understanding the functions it implements: the class of functions a Transformer can theoretically approximate, the subclass of these that is learnable via gradient descent, which training data distributions implicitly bias the model towards which sets of functions, how these functions are implemented across the neural components of the model, and so on. In this talk, I will focus on Transformers implementing language functions. I will begin with a primer on mechanistic interpretability, followed by some open problems in this area. I will then present an alternative view of Transformer functions that can potentially address several existing limitations and open questions: the existence of multiple parallel computation paths, the lack of robustness of autoencoder-based replacement models, and how to formalize the causal models embedded in training.
World Café (discussion format)
15:30 - 18:00
Room: Building 1b, Seminar Room 4ab