Large scale machine learning, probabilistic inference, deep learning
I am a DPhil student in the OxWaSP centre for doctoral training, supervised by Prof. Yee Whye Teh. I am interested in large-scale Bayesian machine learning. For most of the last year I have been working on distributed Bayesian learning using stochastic natural-gradient expectation propagation applied to Bayesian neural networks. I am also interested in stochastic gradient Markov chain Monte Carlo methods.
Publications
2019
J. Merel, L. Hasenclever, A. Galashov, A. Ahuja, V. Pham, G. Wayne, Y. W. Teh, N. Heess, Neural Probabilistic Motor Primitives for Humanoid Control, in International Conference on Learning Representations (ICLR), 2019.
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA) summarizing our results.
@inproceedings{MerHasGal2019a,
author = {Merel, Josh and Hasenclever, Leonard and Galashov, Alexandre and Ahuja, Arun and Pham, Vu and Wayne, Greg and Teh, Yee Whye and Heess, Nicolas},
title = {Neural Probabilistic Motor Primitives for Humanoid Control},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
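The architecture described in the abstract can be pictured as a conditional variational model: an encoder compresses a snippet of expert future states into a latent motor intention z, and a low-level decoder maps the current state and z to an action. The following PyTorch sketch illustrates such a latent-variable bottleneck; the layer sizes, activations, and training loss are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class MotorPrimitiveSketch(nn.Module):
    """Hypothetical inverse model with a latent-variable bottleneck."""
    def __init__(self, state_dim, action_dim, latent_dim=64, horizon=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim * (1 + horizon), 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),  # mean and log-variance of q(z | ...)
        )
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, state, future_states):
        # Encode the demonstrated future into a latent "motor intention" z.
        h = self.encoder(torch.cat([state, future_states.flatten(1)], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        action = self.decoder(torch.cat([state, z], dim=-1))
        # A KL term towards a standard normal prior keeps the bottleneck compressive.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
        return action, kl
Offline cloning would then minimise a reconstruction loss on expert actions plus a weighted KL term; at reuse time a higher-level controller outputs z directly and the frozen decoder turns it into whole-body movement.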
A. Galashov, S. M. Jayakumar, L. Hasenclever, D. Tirumala, J. Schwarz, G. Desjardins, W. M. Czarnecki, Y. W. Teh, R. Pascanu, N. Heess, Information asymmetry in KL-regularized RL, in International Conference on Learning Representations (ICLR), 2019.
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.
Please watch the video demonstrating learned experts and default policies on several continuous control tasks (https://youtu.be/U2qA3llzus8).
@inproceedings{GalJayHas2019a,
author = {Galashov, Alexandre and Jayakumar, Siddhant M. and Hasenclever, Leonard and Tirumala, Dhruva and Schwarz, Jonathan and Desjardins, Guillaume and Czarnecki, Wojciech M. and Teh, Yee Whye and Pascanu, Razvan and Heess, Nicolas},
title = {Information asymmetry in KL-regularized RL},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
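The core objective is the expected reward minus a KL penalty between the policy and a learned default policy, where the default policy is deliberately given less information. Below is a minimal sketch assuming Gaussian policies; the function names, the networks returning (mean, std) pairs, and the simple REINFORCE-style term are illustrative assumptions, not the paper's code.
import torch
import torch.distributions as D

def kl_regularized_loss(policy_net, default_net, full_obs, restricted_obs,
                        actions, returns, alpha=0.1):
    # The policy conditions on the full observation; the default policy only
    # sees a restricted view (e.g. proprioception without the task goal).
    pi = D.Normal(*policy_net(full_obs))          # nets return (mean, std)
    pi0 = D.Normal(*default_net(restricted_obs))
    kl = D.kl_divergence(pi, pi0).sum(-1)
    # One KL term does double duty: it regularizes the policy towards the
    # default, and its gradient w.r.t. default_net distils the policy's
    # behaviour into the information-limited default policy.
    pg = -(pi.log_prob(actions).sum(-1) * returns).mean()
    return pg + alpha * kl.mean()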
2018
R. van den Berg, L. Hasenclever, J. M. Tomczak, M. Welling, Sylvester Normalizing Flows for Variational Inference, Mar-2018.
Variational inference relies on flexible approximate posterior distributions. Normalizing flows provide a general recipe to construct flexible variational posteriors. We introduce Sylvester normalizing flows, which can be seen as a generalization of planar flows. Sylvester normalizing flows remove the well-known single-unit bottleneck from planar flows, making a single transformation much more flexible. We compare the performance of Sylvester normalizing flows against planar flows and inverse autoregressive flows and demonstrate that they compare favorably on several datasets.
@unpublished{BergHasenclever2018,
archiveprefix = {arXiv},
arxivid = {1803.05649},
author = {{van den Berg}, Rianne and Hasenclever, Leonard and Tomczak, Jakub M. and Welling, Max},
eprint = {1803.05649},
month = mar,
title = {{Sylvester Normalizing Flows for Variational Inference}},
year = {2018}
}
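A planar flow applies z' = z + u h(w^T z + b), and its rank-one update is the single-unit bottleneck mentioned in the abstract; a Sylvester flow replaces the vectors with matrices, z' = z + A h(B z + b) with A of shape (D, M) and B of shape (M, D), and Sylvester's determinant identity reduces the Jacobian to an M x M determinant. A minimal numpy sketch follows; it omits the orthogonality parameterisation the paper uses to guarantee invertibility.
import numpy as np

def sylvester_flow(z, A, B, b):
    # z: (batch, D), A: (D, M), B: (M, D), b: (M,), with M <= D.
    pre = z @ B.T + b                  # (batch, M)
    h = np.tanh(pre)
    z_new = z + h @ A.T                # z' = z + A tanh(B z + b)
    # Sylvester identity: det(I_D + A diag(h') B) = det(I_M + diag(h') B A)
    h_prime = 1.0 - h ** 2             # derivative of tanh at pre
    BA = B @ A                         # (M, M)
    jac = np.eye(B.shape[0]) + h_prime[:, :, None] * BA[None, :, :]
    _, logdet = np.linalg.slogdet(jac) # per-sample log |det Jacobian|
    return z_new, logdet
Setting M = 1 recovers a planar flow, which makes the extra flexibility of the M > 1 case explicit.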
2017
L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server, Journal of Machine Learning Research (JMLR), vol. 18, no. 106, pp. 1–37, Oct. 2017.
This paper makes two contributions to Bayesian machine learning algorithms. Firstly, we propose stochastic natural gradient expectation propagation (SNEP), a novel alternative to expectation propagation (EP), a popular variational inference algorithm. SNEP is a black box variational algorithm, in that it does not require any simplifying assumptions on the distribution of interest, beyond the existence of some Monte Carlo sampler for estimating the moments of the EP tilted distributions. Further, as opposed to EP which has no guarantee of convergence, SNEP can be shown to be convergent, even when using Monte Carlo moment estimates. Secondly, we propose a novel architecture for distributed Bayesian learning which we call the posterior server. The posterior server allows scalable and robust Bayesian learning in cases where a dataset is stored in a distributed manner across a cluster, with each compute node containing a disjoint subset of data. An independent Monte Carlo sampler is run on each compute node, with direct access only to the local data subset, but which targets an approximation to the global posterior distribution given all data across the whole cluster. This is achieved by using a distributed asynchronous implementation of SNEP to pass messages across the cluster. We demonstrate SNEP and the posterior server on distributed Bayesian learning of logistic regression and neural networks.
@article{HasWebLie2017a,
author = {Hasenclever, L. and Webb, S. and Lienart, T. and Vollmer, S. and Lakshminarayanan, B. and Blundell, C. and Teh, Y. W.},
note = {ArXiv e-prints: 1512.09327},
title = {Distributed {B}ayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server},
journal = {Journal of Machine Learning Research (JMLR)},
month = oct,
year = {2017},
bdsk-url-1 = {https://arxiv.org/pdf/1512.09327.pdf}
}
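Schematically, the posterior server keeps the global approximation as a sum of natural parameters; each worker repeatedly forms its cavity, estimates the moments of its tilted distribution by running MCMC on its local data, and sends back a damped factor update. The sketch below shows this structure as a synchronous round with generic exponential-family natural parameters; the real system is asynchronous and uses stochastic natural-gradient updates rather than this plain damped EP step, and tilted_moments is a hypothetical list of per-worker callables.
import numpy as np

def posterior_server_round(theta_prior, factors, tilted_moments, damping=0.1):
    # Global approximation = prior + all worker factors (natural parameters).
    theta_global = theta_prior + sum(factors)
    new_factors = []
    for factor, moments in zip(factors, tilted_moments):
        cavity = theta_global - factor       # remove this worker's contribution
        # Worker k: run a Monte Carlo sampler on its tilted distribution
        # p_k(theta) proportional to (local likelihood) * exp(cavity . s(theta)),
        # estimate its moments, and map them back to natural parameters.
        theta_tilted = moments(cavity)
        target = theta_tilted - cavity       # moment-matching factor update
        new_factors.append((1 - damping) * factor + damping * target)
    return new_factors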
T. Nagapetyan, A. B. Duncan, L. Hasenclever, S. J. Vollmer, L. Szpruch, K. Zygalakis, The True Cost of Stochastic Gradient Langevin Dynamics, Jun-2017.
The problem of posterior inference is central to Bayesian statistics and a wealth of Markov Chain Monte Carlo (MCMC) methods have been proposed to obtain asymptotically correct samples from the posterior. As datasets in applications grow larger and larger, scalability has emerged as a central problem for MCMC methods. Stochastic Gradient Langevin Dynamics (SGLD) and related stochastic gradient Markov Chain Monte Carlo methods offer scalability by using stochastic gradients in each step of the simulated dynamics. While these methods are asymptotically unbiased if the stepsizes are reduced in an appropriate fashion, in practice constant stepsizes are used. This introduces a bias that is often ignored. In this paper we study the mean squared error of Lipschitz functionals in strongly log-concave models with i.i.d. data of growing data set size and show that, given a batchsize, to control the bias of SGLD the stepsize has to be chosen so small that the computational cost of reaching a target accuracy is roughly the same for all batchsizes. Using a control variate approach, the cost can be reduced dramatically. The analysis is performed by considering the algorithms as noisy discretisations of the Langevin SDE which correspond to the Euler method if the full data set is used. An important observation is that the scale of the step size is determined by the stability criterion if the accuracy is required for consistent credible intervals. Experimental results confirm our theoretical findings.
@unpublished{Nagapetyan2017,
month = jun,
author = {Nagapetyan, T. and Duncan, A. B. and Hasenclever, L. and Vollmer, S. J. and Szpruch, L. and Zygalakis, K.},
eprint = {1706.02692},
title = {{The True Cost of Stochastic Gradient Langevin Dynamics}},
year = {2017}
}
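For reference, here is an SGLD step next to the control-variate variant the abstract refers to: the variance-reduced gradient re-centres the minibatch estimate around a fixed reference point theta_hat (e.g. an approximate MAP) whose full-data gradient has been precomputed. A hedged numpy sketch with hypothetical user-supplied gradient functions:
import numpy as np

def sgld_step(theta, batch, grad_log_prior, grad_log_lik, N, eps, rng):
    n = len(batch)
    g = grad_log_prior(theta) + (N / n) * sum(grad_log_lik(theta, x) for x in batch)
    return theta + 0.5 * eps * g + np.sqrt(eps) * rng.standard_normal(theta.shape)

def sgld_cv_step(theta, batch, theta_hat, full_grad_hat,
                 grad_log_prior, grad_log_lik, N, eps, rng):
    # full_grad_hat = sum_i grad_log_lik(theta_hat, x_i) over the full data set.
    n = len(batch)
    corr = (N / n) * sum(grad_log_lik(theta, x) - grad_log_lik(theta_hat, x)
                         for x in batch)
    g = grad_log_prior(theta) + full_grad_hat + corr  # unbiased, lower variance
    return theta + 0.5 * eps * g + np.sqrt(eps) * rng.standard_normal(theta.shape)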
X. Lu, V. Perrone, L. Hasenclever, Y. W. Teh, S. J. Vollmer, Relativistic Monte Carlo, in Artificial Intelligence and Statistics (AISTATS), 2017.
Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm that generates proposals for a Metropolis-Hastings algorithm by simulating the dynamics of a Hamiltonian system. However, HMC is sensitive to large time discretizations and performs poorly if there is a mismatch between the spatial geometry of the target distribution and the scales of the momentum distribution. In particular the mass matrix of HMC is hard to tune well. In order to alleviate these problems we propose relativistic Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that introduce a maximum velocity on particles. We also derive stochastic gradient versions of the algorithm and show that the resulting algorithms bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning. Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo. In experiments we show that the relativistic algorithms perform better than classical Newtonian variants and Adam.
@inproceedings{LuPerHas2016a,
author = {Lu, X. and Perrone, V. and Hasenclever, L. and Teh, Y. W. and Vollmer, S. J.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Relativistic {M}onte {C}arlo},
month = apr,
year = {2017},
bdsk-url-1 = {https://arxiv.org/pdf/1609.04388v1.pdf}
}
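The key change relative to standard HMC is the kinetic energy m c^2 sqrt(1 + |p|^2 / (m^2 c^2)), whose gradient gives a velocity with norm bounded by c, so a single large gradient cannot catapult the particle; this is the connection to gradient clipping. A minimal leapfrog sketch, assuming grad_U is the gradient of the negative log target and the names are illustrative:
import numpy as np

def relativistic_leapfrog(theta, p, grad_U, eps, m=1.0, c=1.0):
    p = p - 0.5 * eps * grad_U(theta)                    # half momentum step
    v = p / (m * np.sqrt(1.0 + (p @ p) / (m * c) ** 2))  # |v| < c always
    theta = theta + eps * v                              # full position step
    p = p - 0.5 * eps * grad_U(theta)                    # half momentum step
    return theta, p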
L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Posterior Server, software, 2016.
@software{HasWebLie2016a,
author = {Hasenclever, L. and Webb, S. and Lienart, T. and Vollmer, S. and Lakshminarayanan, B. and Blundell, C. and Teh, Y. W.},
title = {Posterior Server},
year = {2016},
bdsk-url-1 = {https://github.com/BigBayes/PosteriorServer}
}