Leonard Hasenclever

Large-scale machine learning, probabilistic inference, deep learning

I am a DPhil student in the OxWaSP Centre for Doctoral Training, supervised by Prof. Yee Whye Teh. I am interested in large-scale Bayesian machine learning. For most of the last year I have been working on distributed Bayesian learning using stochastic natural-gradient expectation propagation, applied to Bayesian neural networks. I am also interested in stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods.
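To illustrate the flavour of the SG-MCMC methods mentioned above, here is a minimal sketch of stochastic gradient Langevin dynamics (SGLD, Welling & Teh, 2011) for a toy Gaussian model; the model, function name, and hyperparameters are illustrative choices, not taken from any of the papers below.

```python
import numpy as np

def sgld_sample(data, n_steps=20000, step_size=1e-3, batch_size=10, seed=0):
    """Draw approximate posterior samples of a Gaussian mean via SGLD.

    Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 10).
    Each step takes a gradient step on a minibatch estimate of the
    log-posterior gradient and injects Gaussian noise with variance
    equal to the step size.
    """
    rng = np.random.default_rng(seed)
    N = len(data)
    theta = 0.0
    samples = []
    for t in range(n_steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        # Unbiased stochastic estimate of the log-posterior gradient:
        # prior term plus minibatch likelihood term rescaled by N / batch_size.
        grad_prior = -theta / 10.0
        grad_lik = (N / batch_size) * np.sum(batch - theta)
        grad = grad_prior + grad_lik
        # Langevin update: half-step along the gradient plus injected noise.
        theta = theta + 0.5 * step_size * grad \
                + rng.normal(0.0, np.sqrt(step_size))
        if t > n_steps // 2:  # discard the first half as burn-in
            samples.append(theta)
    return np.array(samples)

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=100)
samples = sgld_sample(data)
print(samples.mean())  # close to the posterior mean of theta
```

Because only a minibatch is touched per step, the cost per update is independent of the dataset size; the price is extra variance from the stochastic gradient, which is the trade-off analysed in the SG-MCMC papers listed below.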

Publications

2018

  • R. van den Berg, L. Hasenclever, J. M. Tomczak, M. Welling, Sylvester Normalizing Flows for Variational Inference, March 2018.

2017

  • L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server, Journal of Machine Learning Research, vol. 18, no. 106, 1–37, 2017.
    Project: sgmcmc
  • T. Nagapetyan, A. B. Duncan, L. Hasenclever, S. J. Vollmer, L. Szpruch, K. Zygalakis, The True Cost of Stochastic Gradient Langevin Dynamics, June 2017.
  • X. Lu, V. Perrone, L. Hasenclever, Y. W. Teh, S. J. Vollmer, Relativistic Monte Carlo, in Artificial Intelligence and Statistics (AISTATS), 2017.
    Project: sgmcmc

2015

  • L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server, 2015.
    Project: sgmcmc

Software

2016

  • L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Posterior Server. 2016.
    Project: sgmcmc