Omar Chehab

I am a postdoctoral researcher in the Statistics Department of ENSAE Paris and CREST, working with Anna Korba.

I received my PhD in Mathematical Computer Science at Inria, where I was advised by Aapo Hyvärinen and Alexandre Gramfort. I also hold Master's degrees in Applied Mathematics and Engineering from ENSTA Paris and in Mathematics, Vision and Learning (MVA) from ENS Paris-Saclay.

Email  /  GitHub  /  Google Scholar  /  LinkedIn

Research

My research is in machine learning, with a focus on efficient algorithms for estimating and sampling from probabilistic models, and on learning useful representations of brain activity.

Sampling from Energy-Based Probabilistic Models

Polynomial time sampling from log-smooth distributions in fixed dimension under semi-log-concavity of the forward diffusion with application to strongly dissipative distributions


Adrien Vacher, Omar Chehab, Anna Korba
arXiv, 2025
arxiv /

Time-reversed diffusions are state-of-the-art in sampling but rely on score estimates. We analyze how estimation errors affect the final samples.
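
For context, these papers build on the standard time-reversal identity for diffusions; in generic notation (not the paper's), if data is noised by an Ornstein–Uhlenbeck process, the reversal is driven by the scores of the intermediate laws:

\[ \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \qquad \mathrm{d}Y_t = \big( Y_t + 2\,\nabla \log p_{T-t}(Y_t) \big)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \]

where p_t is the law of X_t. In practice the scores are only available as estimates, which is where the error analysis enters.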

Provable Convergence and Limitations of Geometric Tempering for Langevin Dynamics


Omar Chehab, Anna Korba, Austin Stromme, Adrien Vacher
arXiv, 2024
arxiv / poster /

Annealed MCMC tries to approximate a prescribed path of distributions. We show that the popular geometric-mean path between a Gaussian and the target can have unfavorable geometry. Presented at the Yale workshop on sampling.
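
For reference, the geometric path in question interpolates between an initial distribution \(\pi_0\) (e.g., a Gaussian) and the target \(\pi\) by taking pointwise geometric means (standard notation, not the paper's):

\[ \pi_t \;\propto\; \pi_0^{\,1-t}\,\pi^{\,t}, \qquad t \in [0,1], \]

and annealed Langevin dynamics runs Langevin steps targeting \(\pi_t\) while t sweeps from 0 to 1.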

A Practical Diffusion Path for Sampling


Omar Chehab, Anna Korba
SPIGM Workshop, International Conference on Machine Learning (ICML), 2024
arxiv / poster /

Time-reversed diffusions are state-of-the-art in sampling but rely on score estimates. We aim to reduce their variance.
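
As a concrete illustration of this family of samplers, here is a minimal Euler–Maruyama discretization of a time-reversed Ornstein–Uhlenbeck diffusion in Python; `score(x, t)` is an assumed user-supplied estimate of the score of the noised law at time t, and the whole function is a generic sketch rather than the paper's method.

import numpy as np

def reverse_diffusion_sample(score, dim, T=1.0, n_steps=500, rng=None):
    # Generic sketch: Euler-Maruyama discretization of the time reversal
    # of the Ornstein-Uhlenbeck process dX = -X dt + sqrt(2) dB.
    # `score(x, t)` is assumed to approximate grad log p_t(x).
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = rng.standard_normal(dim)  # start near the OU stationary law N(0, I)
    for k in range(n_steps):
        t = T - k * dt  # integrate the reversal from t = T down to t = 0
        drift = x + 2.0 * score(x, t)  # reverse-time drift
        x = x + dt * drift + np.sqrt(2.0 * dt) * rng.standard_normal(dim)
    return x

Sanity check: for a standard Gaussian target the score is exact, score = lambda x, t: -x, and the sampler returns approximate N(0, I) draws; with a learned score, estimation error and discretization both contribute to the final sampling error.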

Estimating Energy-Based Probabilistic Models

Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond


Omar Chehab, Aapo Hyvärinen, Andrej Risteski
Spotlight, Advances in Neural Information Processing Systems (NeurIPS), 2023
arxiv / code / poster /

Annealed Importance Sampling uses a prescribed path of distributions to compute an estimate of a normalizing constant. We quantify how the choice of path impacts the estimation error.
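
For context, the identity behind annealed estimators of normalizing constants, in standard notation: given unnormalized densities \(\tilde\pi_0, \ldots, \tilde\pi_K\) along a path, the ratio of constants telescopes into one-step importance-sampling ratios,

\[ \frac{Z_K}{Z_0} \;=\; \prod_{k=1}^{K} \frac{Z_k}{Z_{k-1}} \;=\; \prod_{k=1}^{K} \mathbb{E}_{X \sim \pi_{k-1}}\!\left[ \frac{\tilde\pi_k(X)}{\tilde\pi_{k-1}(X)} \right], \]

so the variance of the estimate depends on how the intermediate distributions are spaced along the path.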

The Optimal Noise in Noise-Contrastive Learning Is Not What You Think


Omar Chehab, Alexandre Gramfort, Aapo Hyvärinen
Conference on Uncertainty in Artificial Intelligence (UAI), 2022
arxiv / code / poster /

NCE estimates the data density by minimizing a binary classification loss between data and noise samples. We find the optimal noise distribution that minimizes the estimation error.
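
For reference, the NCE objective in its standard form (generic notation, with equal numbers of data and noise samples): with data x ~ p_d, noise x ~ p_n, and model p_θ, one minimizes the logistic loss

\[ \mathcal{L}(\theta) = -\,\mathbb{E}_{x \sim p_d}\big[\log \sigma(G_\theta(x))\big] - \mathbb{E}_{x \sim p_n}\big[\log\big(1 - \sigma(G_\theta(x))\big)\big], \qquad G_\theta(x) = \log p_\theta(x) - \log p_n(x), \]

where \(\sigma\) is the sigmoid. The noise p_n appears both in the loss and in the asymptotic variance of the estimator, which is what makes its choice matter.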

Learning Representations of Brain Activity

MVICAD2: Multi-View Independent Component Analysis with Delays and Dilations


Ambroise Heurtebise, Omar Chehab, Pierre Ablin, Alexandre Gramfort
arXiv, 2025
arxiv / code /

Independent Component Analysis (ICA) is a popular algorithm for learning a representation of data. We propose a version that handles data collected in different contexts, whose representations differ only by temporal delays or dilations.
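
As a rough sketch of this model class (the notation is mine, not the paper's): each view i observes its own mixture of shared sources, where each source may be delayed and dilated per view,

\[ x_i(t) = A_i\, s_i(t) + n_i(t), \qquad s_{i,j}(t) = s_j\!\big( a_{i,j}\,(t - \tau_{i,j}) \big), \]

with view-specific mixing matrices \(A_i\), per-source delays \(\tau_{i,j}\), and dilation factors \(a_{i,j}\).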

Deep Recurrent Encoder: an end-to-end network to model magnetoencephalography at scale


Omar Chehab*, Alexandre Defossez*, Jean-Christophe Loiseau, Alexandre Gramfort, Jean-Remi King
Neurons, Behavior, Data analysis, and Theory, 2022
arxiv / code /

We compare different models for predicting the brain’s response to external stimuli. Our model, based on a deep neural network, is more accurate and interpretable.
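
For illustration only, a hypothetical minimal sketch of this kind of end-to-end recurrent encoder in PyTorch: stimulus features in, multichannel MEG predictions out. All layer names and sizes below are my assumptions, not the published architecture.

import torch
import torch.nn as nn

class RecurrentEncoder(nn.Module):
    # Illustrative sketch: embed stimulus features, run a recurrent
    # network over time, and linearly read out predicted MEG channels.
    # Sizes are arbitrary placeholders, not the published model.
    def __init__(self, n_features=40, n_channels=270, hidden=256):
        super().__init__()
        self.embed = nn.Linear(n_features, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_channels)

    def forward(self, stimulus):  # (batch, time, n_features)
        h, _ = self.rnn(self.embed(stimulus))  # (batch, time, hidden)
        return self.readout(h)  # (batch, time, n_channels)

Such a model would be trained with, e.g., a mean-squared error between predicted and recorded MEG responses.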

Learning with self-supervision on EEG data


Alexandre Gramfort, Hubert Banville, Omar Chehab, Aapo Hyvärinen, Denis Engemann
IEEE Workshop on Brain-Computer Interface, 2021
arxiv /

We learn rich representations of EEG brain activity using a self-supervised loss.

Uncovering the structure of clinical EEG signals with self-supervised learning


Hubert Banville, Omar Chehab, Aapo Hyvärinen, Denis Engemann, Alexandre Gramfort
Journal of Neural Engineering, 2021
arxiv /

We learn rich representations of clinical EEG recordings using self-supervised losses, and find they uncover physiologically and clinically meaningful structure.
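
One pretext task used in this line of work is relative positioning: sample two EEG windows and classify whether they occur close together in time. Below is a minimal PyTorch sketch, where `encoder` (a user-supplied network mapping a window to a feature vector) and the dot-product similarity head are my simplifications of the actual contrast module.

import torch
import torch.nn as nn

def relative_positioning_loss(encoder, x1, x2, close):
    # Self-supervised pretext task: embed two EEG windows and predict
    # whether they were sampled near each other in time.
    # `close` is a 0/1 tensor of labels; the dot-product head is an
    # illustrative simplification.
    z1, z2 = encoder(x1), encoder(x2)  # (batch, features)
    logits = (z1 * z2).sum(dim=1)  # similarity score per pair
    return nn.functional.binary_cross_entropy_with_logits(logits, close.float())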

A mean-field approach to the dynamics of networks of complex neurons, from nonlinear Integrate-and-Fire to Hodgkin–Huxley models


Mallory Carlu, Omar Chehab, Leonardo Dalla Porta, Damien Depannemaecker, Charlotte Héricé, Maciej Jedynak, Elif Köksal Ersöz, Paulo Muratore, Selma Souihe, Cristiano Capone, Yann Zerlaut, Alain Destexhe, Matteo di Volo
Journal of Neurophysiology, 2020
arxiv /

Our theory predicts the average behavior of neuronal populations that fire asynchronously.
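
Schematically, such mean-field theories reduce the network to coupled equations for the excitatory and inhibitory population rates (a generic first-order form, not the paper's exact formalism):

\[ T\,\frac{\mathrm{d}\nu_e}{\mathrm{d}t} = F_e(\nu_e, \nu_i) - \nu_e, \qquad T\,\frac{\mathrm{d}\nu_i}{\mathrm{d}t} = F_i(\nu_e, \nu_i) - \nu_i, \]

where the transfer functions \(F_{e,i}\) are fit to the single-neuron model (Integrate-and-Fire, Hodgkin–Huxley, etc.).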




Talks

* 10/2024, Gregory Wornell's team seminar, MIT, USA
* 10/2024, Youssef Marzouk's team seminar, MIT, USA
* 03/2024, Takeru Matsuda's team seminar, RIKEN, Japan
* 03/2023, Self-Supervised Learning Reading Group, Vector Institute, Canada
* 02/2020, First International Workshop on Nonlinear ICA, Inria, France

Teaching

I was a teaching assistant for the following Master's courses.


Optimization for Data Science - Institut Polytechnique de Paris (2021-2023)
Professors: Alexandre Gramfort, Pierre Ablin
Optimization - CentraleSupélec, Université Paris-Saclay (2020-2021)
Professors: Jean-Christophe Pesquet, Sorin Olaru, Stéphane Font
Advanced Machine Learning - CentraleSupélec, Université Paris-Saclay (2020-2022)
Professors: Émilie Chouzenoux, Frédéric Pascal

Service

I review machine learning papers for the NeurIPS, ICML, ICLR and AISTATS conferences. I am grateful to have been recognized as a "top reviewer" for AISTATS 2022 and for NeurIPS 2022, 2023 and 2024.



Design and source code from Jon Barron's website