Adam Foster

Probabilistic machine learning, deep learning, unsupervised representation learning, optimal experimental design, probabilistic programming

I am a DPhil student in Statistics at the University of Oxford, supervised by Yee Whye Teh and Tom Rainforth. I received my Bachelor's and Master's degrees in mathematics from Cambridge and worked as a machine learning engineer before joining the department. I have a broad range of interests in statistical machine learning. A large part of my work in Oxford has been on optimal experimental design: how do we design experiments that will be most informative about the process being investigated, whilst minimizing cost? I contribute to the deep probabilistic programming language Pyro and am the main author of Pyro's experimental design support. There is a deep mathematical connection between optimal experimental design, mutual information, and contrastive representation learning. I also study contrastive learning from the perspectives of invariance and mutual information.
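As a toy illustration of the connection between experimental design and mutual information (a sketch of my own, not code from any of the papers below), the expected information gain of a design is exactly the mutual information between the parameter and the outcome, and can be estimated by nested Monte Carlo. The conjugate Gaussian model and all settings here are illustrative choices, picked because the answer is known in closed form:

```python
import numpy as np

def nmc_eig(sigma, n_outer=2000, n_inner=2000, seed=0):
    """Nested Monte Carlo estimate of the expected information gain
    I(theta; y) for the toy model theta ~ N(0, 1), y | theta ~ N(theta, sigma^2)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_outer)                # outer prior draws
    y = theta + sigma * rng.normal(size=n_outer)    # simulated outcomes
    log_norm = -np.log(sigma * np.sqrt(2 * np.pi))
    # log p(y_n | theta_n): likelihood under the theta that generated y_n
    log_lik = -0.5 * ((y - theta) / sigma) ** 2 + log_norm
    # log p(y_n) approximated by a log-mean of likelihoods over fresh prior draws
    theta_inner = rng.normal(size=n_inner)
    log_dens = -0.5 * ((y[:, None] - theta_inner[None, :]) / sigma) ** 2 + log_norm
    mx = log_dens.max(axis=1, keepdims=True)        # stabilised log-mean-exp
    log_marg = mx[:, 0] + np.log(np.mean(np.exp(log_dens - mx), axis=1))
    # EIG = E[log p(y | theta) - log p(y)], i.e. the mutual information
    return float(np.mean(log_lik - log_marg))

sigma = 0.5  # a "design" controlling the observation noise
analytic = 0.5 * np.log(1 + 1 / sigma ** 2)  # closed-form Gaussian MI
estimate = nmc_eig(sigma)
print(f"NMC estimate {estimate:.3f} vs analytic {analytic:.3f}")
```

Lower observation noise yields a higher expected information gain, which is what a design optimiser would exploit; in practice the likelihood and marginal terms are intractable and are replaced by learned bounds.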

Other research interests of mine include the role of equivariance in machine learning, Bayesian modelling and probabilistic programming, and deep representation learning.

Publications

  • F. Bickford Smith, A. Kirsch, S. Farquhar, Y. Gal, A. Foster, T. Rainforth, Prediction-oriented Bayesian active learning, International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.


  • A. Foster, R. Pukdee, T. Rainforth, Improving Transformation Invariance in Contrastive Representation Learning, International Conference on Learning Representations (ICLR), 2021.
  • A. Foster, D. R. Ivanova, I. Malik, T. Rainforth, Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design, International Conference on Machine Learning (ICML, long presentation), 2021.
  • E. Mathieu, A. Foster, Y. W. Teh, On Contrastive Representations of Stochastic Processes, Advances in Neural Information Processing Systems (NeurIPS), 2021.
  • D. R. Ivanova, A. Foster, S. Kleinegesse, M. U. Gutmann, T. Rainforth, Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods, Advances in Neural Information Processing Systems (NeurIPS), 2021.


  • A. Foster, M. Jankowiak, M. O'Meara, Y. W. Teh, T. Rainforth, A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments, International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
    Project: tencent-lsml


  • A. Foster, M. Jankowiak, E. Bingham, P. Horsfall, Y. W. Teh, T. Rainforth, N. Goodman, Variational Bayesian Optimal Experimental Design, Advances in Neural Information Processing Systems (NeurIPS, spotlight), 2019.
    Project: bigbayes


  • B. Bloem-Reddy, A. Foster, E. Mathieu, Y. W. Teh, Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks, Conference on Uncertainty in Artificial Intelligence (UAI), 2018.
    Project: bigbayes
  • A. Foster, M. Jankowiak, E. Bingham, Y. W. Teh, T. Rainforth, N. Goodman, Variational Optimal Experiment Design: Efficient Automation of Adaptive Experiments, NeurIPS Workshop on Bayesian Deep Learning, 2018.
    Project: bigbayes


  • B. Bloem-Reddy, E. Mathieu, A. Foster, T. Rainforth, H. Ge, M. Lomelí, Z. Ghahramani, Y. W. Teh, Sampling and inference for discrete random probability measures in probabilistic programs, NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.
    Project: bigbayes