NEURIPS@PARIS 2025: Two Days of Cutting-Edge Research and Dialogue in Machine Learning

Date: 27 Nov 2025
Location: Paris

NEURIPS@PARIS 2025 brought together an exceptional community of researchers, engineers, and industry innovators for two days of high-level scientific exchange. This recap highlights the sessions, themes, and insights that shaped this packed and inspiring event.

Across two days, NEURIPS@PARIS offered a rich overview of today’s machine learning landscape: foundational theory, algorithmic advances, deep learning, probabilistic methods, and impactful applications in health, imaging, modeling, and scientific computing. The atmosphere was both rigorous and vibrant — animated poster sessions, technical debates, and valuable interactions between academia and industry.

Day 1 — Theory, Algorithms, and Deep Learning

Morning Session

Theory of Machine Learning 
The event opened with a session dedicated to fundamental advances in learning theory, covering adaptation, generalization, and nonparametric frameworks:

  • Paul Liautaud — Minimax Adaptive Online Nonparametric Regression over Besov Spaces
  • Lucas Morisset — Non-Asymptotic Analysis of Data Augmentation for Precision Matrix Estimation
  • Dario Shariatian — Algorithm- and Data-Dependent Generalization Bounds for Diffusion Models
  • Fares El Khoury — Learning Theory for Kernel Bilevel Optimization

These talks shed light on online minimax rates, non-asymptotic guarantees for augmentation methods, generalization behaviors in diffusion models, and theoretical insights into bilevel optimization.

ML Methods and Algorithms 
The second morning session focused on algorithmic innovations and statistical robustness:

  • Julie Mordacq — T-REGS: Minimum Spanning Tree Regularization for Self-Supervised Learning
  • Adarsh Jamadandi — Memorization in Graph Neural Networks
  • Anna Van Elst — Robust Distributed Estimation: Extending Gossip Algorithms to Ranking and Trimmed Means
  • Marc Lambert — Variational Inference with Mixtures of Isotropic Gaussians

Highlights included new regularization strategies for self-supervision, memorization phenomena in GNNs, robust distributed estimation techniques, and advances in variational inference.
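
For readers less familiar with gossip protocols, here is a minimal sketch of the classical randomized gossip-averaging primitive that robust extensions such as the ranking and trimmed-mean variants build on. It is a generic illustration with our own naming, not the method presented in the talk:

    import random

    def gossip_average(values, n_rounds=2000, seed=0):
        # Classical randomized gossip: in each round, two randomly chosen
        # nodes replace their values with the pair's mean. The global mean
        # is preserved at every step, so all nodes converge to it.
        rng = random.Random(seed)
        x = list(values)
        for _ in range(n_rounds):
            i, j = rng.sample(range(len(x)), 2)  # random communicating pair
            x[i] = x[j] = (x[i] + x[j]) / 2
        return x

    print(gossip_average([1.0, 5.0, 9.0, 2.0, 3.0]))  # every entry ≈ 4.0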

Afternoon Session

Deep Learning 
The afternoon explored deep learning, particularly generative models, vision, and generalization:

  • Raphaël Urfin — Why Diffusion Models Don’t Memorize: Implicit Dynamical Regularization in Training
  • Theodoros Kouzelis — Boosting Generative Image Modeling via Joint Image-Feature Synthesis
  • Thibaut Loiseau — Alligat0R: Pre-Training through Covisibility Segmentation for Camera Pose Regression
  • Anne Gagneux — On the Closed-Form of Flow Matching: Generalization Does Not Arise from Target Stochasticity

Discussions touched on implicit regularization in diffusion models, joint generative pipelines, covisibility-based pretraining, and theoretical results in flow matching.
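
As context for the flow-matching talk, the standard conditional flow matching objective with linear interpolation paths can be written as follows (notation ours, following the usual formulation):

    \mathcal{L}(\theta)
      = \mathbb{E}_{t \sim \mathcal{U}[0,1],\, x_0 \sim p_0,\, x_1 \sim p_1}
        \big\| v_\theta(x_t, t) - (x_1 - x_0) \big\|^2,
      \qquad x_t = (1 - t)\, x_0 + t\, x_1 .

The "target stochasticity" in the title presumably refers to the randomness of the regression target x_1 - x_0 given x_t, which the result argues is not the source of generalization.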

Day 2 — Probabilistic Methods, Algorithmic Advances, and Applied ML

Morning Session

Probabilistic Methods 
Day 2 opened with strong contributions at the intersection of probability, divergences, and stochastic optimization:

  • Ferdinand Genans — Stochastic Optimization in Semi-Discrete Optimal Transport: Convergence Analysis and Minimax Rate
  • Reuben Dorent — Connecting Jensen–Shannon and Kullback–Leibler Divergences: A New Bound for Representation Learning
  • Elen Vardanyan & Vahan Arsenyan — Assessing the Quality of Denoising Diffusion Models in Wasserstein Distance
  • Antonio Ocello — Wasserstein Convergence of Critically Damped Langevin Diffusions

These talks addressed optimal transport optimization, novel divergence bounds, quantitative evaluations of diffusion models, and convergence analyses of Langevin dynamics.
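
For orientation, a critically damped Langevin diffusion augments the state x_t with a velocity v_t; in a simplified unit-mass form (notation ours), the dynamics read

    dx_t = v_t \, dt,
    \qquad
    dv_t = \big( -x_t - \Gamma\, v_t \big)\, dt + \sqrt{2\Gamma}\, dW_t ,

with the damping coefficient set at the critical value (\Gamma^2 = 4 in these units), the regime in which the dynamics relax without oscillating or slowing down.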

ML Methods and Algorithms 2 
The second technical session of the day explored symmetry, generative potentials, long-range dependencies, and evaluation metrics:

  • Nina Vesseron — Sample and Map from a Single Convex Potential using Conjugate Generative Moment Measures
  • Luiz Chamon — Learning (Approximately) Equivariant Networks via Constrained Optimization
  • Maxence Ernoult — Learning Long-Range Dependencies through Time Reversal Symmetry Breaking
  • Félix Chavelli — Interpretable Evaluation Measures for Time Series Segmentation

Here, speakers presented new tools for sampling from convex potentials, enforcing equivariance, modeling long-range temporal structure, and designing interpretable segmentation metrics.
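
On the equivariance front, one generic way to cast approximate equivariance as a constrained problem (a schematic formulation of ours, not necessarily the one presented) is

    \min_\theta \; \mathbb{E}\big[ \ell( f_\theta(x), y ) \big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{g \sim \mathcal{G}}\big[ \| f_\theta(g \cdot x) - g \cdot f_\theta(x) \|^2 \big] \le \varepsilon ,

where \mathcal{G} is the symmetry group and \varepsilon tunes how strictly equivariance is enforced; such constrained problems are commonly tackled with Lagrangian primal-dual methods.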

Afternoon Session

Applications in ML & Statistics 
The final session showcased meaningful applications across medicine, neuroscience, PDE modeling, and molecular geometry:

  • Pierre Marza — THUNDER: A Benchmark for Tile-Level Histopathology Image Understanding
  • Steeve Laquitaine — Decomposing Sensory Neural Information Using Diffusion Models
  • Lise Le Boudec — ENMA: Tokenwise Autoregression for Continuous Neural PDE Operators
  • Noémie Bergues — Template-Guided 3D Molecular Pose Generation via Flow Matching and Differentiable Optimization

These contributions illustrated the expanding role of ML in scientific discovery, from medical imaging to molecular generation.

Acknowledgments

A warm thank you to all speakers, participants, organizers, and sponsors who made NEURIPS@PARIS possible. Your dedication and engagement created an inspiring and collaborative environment.

Photo Gallery & Resources