Probabilistic inference, Monte Carlo methods, neural networks, point processes

I am a DPhil student in Statistics at the University of Oxford, supervised by Yee Whye Teh and Arnaud Doucet. I also spend two days a week as a Research Scientist at DeepMind. Previously, I received my MSc from the University of Toronto, supervised by Geoffrey Hinton. I was one of the primary contributors to the AlphaGo project.

Publications

2017

C. J. Maddison,
D. Lawson,
G. Tucker,
N. Heess,
M. Norouzi,
A. Mnih,
A. Doucet,
Y. W. Teh,
Particle Value Functions, in ICLR 2017 Workshop Proceedings, 2017.

The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects in an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
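As background for the abstract above, the risk-sensitive value function induced by an exponential utility is usually written in the following standard form (notation here is illustrative, not taken verbatim from the paper):

```latex
% Risk-sensitive value function from an exponential utility.
% R denotes the return from state s; \beta is the risk-sensitivity parameter.
V_\beta(s) \;=\; \frac{1}{\beta} \log \mathbb{E}\!\left[ e^{\beta R} \,\middle|\, s \right]
% \beta > 0 emphasizes high returns (risk-seeking),
% \beta < 0 emphasizes low returns (risk-averse);
% as \beta \to 0, V_\beta(s) recovers the usual expected return \mathbb{E}[R \mid s].
```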

@inproceedings{MadLawTuc2017a,
author = {Maddison, C. J. and Lawson, D. and Tucker, G. and Heess, N. and Norouzi, M. and Mnih, A. and Doucet, A. and Teh, Y. W.},
booktitle = {ICLR 2017 Workshop Proceedings},
note = {ArXiv e-prints: 1703.05820},
title = {Particle Value Functions},
year = {2017}
}

C. J. Maddison,
D. Lawson,
G. Tucker,
N. Heess,
M. Norouzi,
A. Mnih,
A. Doucet,
Y. W. Teh,
Filtering Variational Objectives, ArXiv e-prints, 2017.

The evidence lower bound (ELBO) appears in many algorithms for maximum likelihood estimation (MLE) with latent variables because it is a sharp lower bound of the marginal log-likelihood. For neural latent variable models, optimizing the ELBO jointly in the variational posterior and model parameters produces state-of-the-art results. Inspired by the success of the ELBO as a surrogate MLE objective, we consider the extension of the ELBO to a family of lower bounds defined by a Monte Carlo estimator of the marginal likelihood. We show that the tightness of such bounds is asymptotically related to the variance of the underlying estimator. We introduce a special case, the filtering variational objectives (FIVOs), which take the same arguments as the ELBO and pass them through a particle filter to form a tighter bound. FIVOs can be optimized tractably with stochastic gradients, and are particularly suited to MLE in sequential latent variable models. In standard sequential generative modeling tasks we present uniform improvements over models trained with the ELBO, including some whole nat-per-timestep improvements.
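To make the bound in the abstract concrete: if a non-negative estimator of the marginal likelihood is unbiased, Jensen's inequality turns its log into a lower bound on the log-marginal. This is the standard identity behind such Monte Carlo objectives (notation here is illustrative):

```latex
% An unbiased estimator \hat{p}_N(x) of p(x) yields, by Jensen's inequality,
\mathbb{E}\!\left[ \log \hat{p}_N(x) \right] \;\le\; \log \mathbb{E}\!\left[ \hat{p}_N(x) \right] \;=\; \log p(x).
% The ELBO is the special case \hat{p}_1(x) = p(x, z)/q(z) with z \sim q;
% a FIVO instead takes \hat{p}_N(x) to be a particle filter's likelihood estimate,
% whose lower variance typically yields a tighter bound.
```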

@unpublished{MadLawTuc2017b,
author = {Maddison, C. J. and Lawson, D. and Tucker, G. and Heess, N. and Norouzi, M. and Mnih, A. and Doucet, A. and Teh, Y. W.},
note = {ArXiv e-prints: 1705.09279},
title = {Filtering Variational Objectives},
year = {2017}
}