Omar Chehab

Bonjour! Welcome to my website. I am a postdoc at Carnegie Mellon University, in the Machine Learning Department, working with Pradeep Ravikumar. I completed my graduate studies in France: a PhD in Mathematical Computer Science at Inria with Aapo Hyvärinen and Alexandre Gramfort, followed by a postdoc in the Statistics Department of CREST-ENSAE with Anna Korba. More details are listed in my CV and on this profile page. You can reach me via email, find my code on GitHub, browse my publications on Google Scholar, or connect on LinkedIn.

Research
Sampling from multi-modal distributions with polynomial query complexity in fixed dimension via reverse diffusion
Conference on Neural Information Processing Systems (NeurIPS), 2025
Time-reversed diffusions are state-of-the-art for sampling multi-modal distributions, but they rely on score estimates. We analyze how estimation errors affect the final samples.
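For context, a minimal sketch of the mechanism (standard notation, not taken from the paper): a forward diffusion noises the target, and its time reversal requires the score $\nabla \log p_t$, which in practice is replaced by an estimate.

$$ dX_t = -X_t\,dt + \sqrt{2}\,dW_t \quad\Longrightarrow\quad dY_t = \big[\,Y_t + 2\,\nabla \log p_{T-t}(Y_t)\,\big]\,dt + \sqrt{2}\,dB_t $$

Running the reverse dynamics with an approximate score $\hat{s}_t \approx \nabla \log p_t$ is what introduces the estimation errors analyzed here.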
Provable Convergence and Limitations of Geometric Tempering for Langevin Dynamics
International Conference on Learning Representations (ICLR), 2025
Annealed MCMC tries to approximate a prescribed path of distributions. We show that the popular path of geometric means between a Gaussian and the target has unfavorable geometry. Presented at the Yale workshop on sampling.
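Concretely, geometric tempering interpolates in log-space between an initial density $\pi_0$ (e.g., a Gaussian) and the target $\pi_1$ (standard definition; the notation is mine):

$$ \pi_t(x) \;\propto\; \pi_0(x)^{1-t}\,\pi_1(x)^{t}, \qquad t \in [0,1]. $$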
A Practical Diffusion Path for Sampling
Workshop on Structured Probabilistic Inference & Generative Modeling, International Conference on Machine Learning (ICML), 2024
Time-reversed diffusions are state-of-the-art in sampling but rely on score estimates. We aim to reduce the variance of these estimates.
Conditional Noise-Contrastive Estimation of Energy-Based Models by Jumping Between Modes
Workshop on Principles of Generative Modeling, EurIPS, 2025
We explore the design choices of Conditional Noise-Contrastive Estimation (CNCE), a method for learning energy-based models.
Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation
arXiv, 2023
Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond
Spotlight, Conference on Neural Information Processing Systems (NeurIPS), 2023
Annealed Importance Sampling estimates a normalizing constant by traversing a prescribed path of distributions. We quantify how the choice of path impacts the estimation error.
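For intuition, a classical identity behind such annealed estimators (a standard result, not specific to the paper): writing $\tilde{\pi}_t$ for the unnormalized densities along the path, with normalizing constants $Z_t$,

$$ \log \frac{Z_1}{Z_0} \;=\; \int_0^1 \mathbb{E}_{x \sim \pi_t}\big[\partial_t \log \tilde{\pi}_t(x)\big]\,dt, $$

so the geometry of the path $(\pi_t)$ directly shapes the error of any estimator discretizing this integral.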
The Optimal Noise in Noise-Contrastive Learning Is Not What You Think
Conference on Uncertainty in Artificial Intelligence (UAI), 2022
NCE estimates the data density by minimizing a binary classification loss between data and noise samples. We characterize the noise distribution that minimizes the estimation error.
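As a reminder of the setup (the standard NCE objective with equal data/noise proportions; notation is mine, not the paper's): given a model $p_\theta$ and a noise density $p_n$, one minimizes the logistic loss

$$ \mathcal{L}(\theta) = -\,\mathbb{E}_{x\sim p_d}\!\left[\log \frac{p_\theta(x)}{p_\theta(x)+p_n(x)}\right] - \mathbb{E}_{x\sim p_n}\!\left[\log \frac{p_n(x)}{p_\theta(x)+p_n(x)}\right], $$

and the question studied here is which choice of $p_n$ leads to the smallest asymptotic estimation error.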
Multi-View Causal Discovery without Non-Gaussianity: Identifiability and Algorithms
Oral, Workshop on Causality for Impact - Practical challenges for real-world applications of causal methods, EurIPS, 2025
We learn causal relationships (a Directed Acyclic Graph) between random variables observed across different environments.
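As a schematic of the setting (my notation; the paper's exact model may differ), a linear structural equation model in environment $e$ reads

$$ x^{(e)} = B^\top x^{(e)} + \varepsilon^{(e)}, $$

where the weighted adjacency matrix $B$ is shared across environments and constrained to be acyclic (a DAG).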
MVICAD2: Multi-View Independent Component Analysis with Delays and Dilations
IEEE Transactions on Biomedical Engineering, 2025
Independent Component Analysis (ICA) is a popular algorithm for learning a representation of data. We propose a version that handles data collected from different contexts, where the representations differ only by temporal delays or dilations.
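For reference, classical ICA posits $x(t) = A\,s(t)$ with statistically independent sources $s$. A multi-view variant in this spirit (schematic only; my notation, not the paper's exact parameterization) ties each view $k$ to shared sources up to a per-source dilation $\alpha$ and delay $\tau$:

$$ x^{(k)}(t) = A^{(k)} s^{(k)}(t), \qquad s_j^{(k)}(t) = s_j\big(\alpha_j^{(k)}\, t - \tau_j^{(k)}\big). $$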
Deep Recurrent Encoder: an end-to-end network to model magnetoencephalography at scale
Neurons, Behavior, Data Analysis, and Theory, 2022
We compare different models for predicting the brain’s response to external stimuli. Our model, based on a deep neural network, is more accurate and interpretable.
Learning with self-supervision on EEG data
IEEE Workshop on Brain-Computer Interface, 2021
We learn rich representations of EEG brain activity using a self-supervised loss.
Uncovering the structure of clinical EEG signals with self-supervised learning
Journal of Neural Engineering, 2021
We learn rich representations of clinical EEG signals using a self-supervised loss, and examine the structure these representations uncover.
A mean-field approach to the dynamics of networks of complex neurons, from nonlinear Integrate-and-Fire to Hodgkin–Huxley models
Journal of Neurophysiology, 2020
Our theory predicts the average behavior of neuronal populations that fire asynchronously.