Probabilistic inference, Deep learning, Generative models

I am a second-year DPhil student in Statistics at the University of Oxford, supervised by Prof. Yee Whye Teh and Ryota Tomioka from Microsoft Research. Previously, I received a joint MSc from École des Ponts ParisTech and École Normale Supérieure Paris-Saclay. My research interests lie in probabilistic generative models, representation learning, and geometry.

@inproceedings{BloemReddy:etal:2018,
author = {Bloem-Reddy, Benjamin and Foster, Adam and Mathieu, Emile and Teh, Yee Whye},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
title = {Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks},
month = aug,
year = {2018}
}

We develop a generalised notion of disentanglement in Variational Auto-Encoders (VAEs) by casting it as a decomposition of the latent representation, characterised by i) enforcing an appropriate level of overlap in the latent encodings of the data, and ii) regularisation of the average encoding to a desired structure, represented through the prior. We motivate this by showing that a) the β-VAE disentangles purely through regularisation of the overlap in latent encodings, and through its average (Gaussian) encoder variance, and b) disentanglement, as independence between latents, can be cast as a regularisation of the aggregate posterior to a prior with specific characteristics. We validate this characterisation by showing that simple manipulations of these factors, such as using rotationally variant priors, can help improve disentanglement, and discuss how this characterisation provides a more general framework to incorporate notions of decomposition beyond just independence between the latents.
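The β-VAE mechanism discussed in the abstract amounts to scaling the KL term that pulls the encoder towards the prior. A minimal numerical sketch of that objective, assuming a diagonal-Gaussian encoder and a standard-normal prior; the function and argument names here are illustrative, not from the paper:

```python
import numpy as np

def beta_vae_objective(recon_err, mu, logvar, beta=4.0):
    """beta-VAE loss: reconstruction error plus a beta-weighted KL term.

    The encoder is q(z|x) = N(mu, diag(exp(logvar))); the prior is N(0, I).
    beta > 1 strengthens the regularisation of the (aggregate) posterior
    towards the prior, which is the knob the abstract refers to.
    """
    # Closed-form KL( N(mu, sigma^2 I) || N(0, I) ), summed over latent dims
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon_err + beta * kl
```

When the encoder matches the prior (mu = 0, logvar = 0) the KL term vanishes and only the reconstruction error remains; increasing β scales the penalty on any deviation.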

@article{mathieu2018,
title = {Disentangling Disentanglement},
author = {Mathieu, Emile and Rainforth, Tom and Narayanaswamy, Siddharth and Teh, Yee Whye},
journal = {NeurIPS Workshop on Bayesian Deep Learning},
year = {2018}
}

2017

B. Bloem-Reddy, E. Mathieu, A. Foster, T. Rainforth, H. Ge, M. Lomelí, Z. Ghahramani, Y. W. Teh, Sampling and inference for discrete random probability measures in probabilistic programs, NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.

We consider the problem of sampling a sequence from a discrete random probability measure (RPM) with countable support, under (probabilistic) constraints of finite memory and computation. A canonical example is sampling from the Dirichlet Process, which can be accomplished using its stick-breaking representation and lazy initialization of its atoms. We show that efficiently lazy initialization is possible if and only if a size-biased representation of the discrete RPM is used. For models constructed from such discrete RPMs, we consider the implications for generic particle-based inference methods in probabilistic programming systems. To demonstrate, we implement SMC for Normalized Inverse Gaussian Process mixture models in Turing.
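The canonical example in the abstract, sampling from a Dirichlet Process via its stick-breaking representation with lazy initialization of atoms, can be sketched as follows. This is an illustrative implementation under our own naming, not code from the paper:

```python
import numpy as np

def lazy_dp_sampler(alpha, base_sampler, rng=None):
    """Lazily sample a sequence from a single draw of DP(alpha, H).

    Stick-breaking representation: weights pi_k = v_k * prod_{j<k}(1 - v_j)
    with v_k ~ Beta(1, alpha), and atoms theta_k ~ H. Atoms and weights are
    only instantiated when a uniform draw falls beyond the mass broken off
    so far, giving finite memory per sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = []      # instantiated stick-breaking weights
    atoms = []        # lazily instantiated atoms theta_k ~ H
    remaining = 1.0   # mass of the stick not yet broken off
    while True:
        u = rng.random()
        # Break more sticks until the instantiated mass covers u
        while not weights or u > sum(weights):
            v = rng.beta(1.0, alpha)
            weights.append(remaining * v)
            remaining *= 1.0 - v
            atoms.append(base_sampler(rng))
        # Select the atom whose weight interval contains u
        k, acc = 0, weights[0]
        while acc < u:
            k += 1
            acc += weights[k]
        yield atoms[k]
```

Because the same atoms are reused across draws, samples cluster: a long run of draws typically touches only a small number of distinct atoms, which is what makes the lazy representation memory-efficient.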

@article{bloemreddy2017rpm,
title = {Sampling and inference for discrete random probability measures in probabilistic programs},
author = {Bloem-Reddy, Benjamin and Mathieu, Emile and Foster, Adam and Rainforth, Tom and Ge, Hong and Lomelí, María and Ghahramani, Zoubin and Teh, Yee Whye},
journal = {NIPS Workshop on Advances in Approximate Bayesian Inference},
year = {2017}
}