I am a PhD student supervised by Tom Rainforth, Frank Wood and Yee Whye Teh.
I am interested in probabilistic programming and inference.
So far I have worked mostly on aspects of amortized inference, both for better learning of deep generative models and for speeding up inference on probabilistic programs.
A. Golinski, F. Wood, T. Rainforth, Amortized Monte Carlo Integration, International Conference on Machine Learning (ICML, Best Paper honorable mention), 2019.
Current approaches to amortizing Bayesian inference focus solely on
approximating the posterior distribution. Typically, this approximation is,
in turn, used to calculate expectations for one or more target functions—a
computational pipeline which is inefficient when the target function(s) are
known upfront. In this paper, we address this inefficiency by introducing
AMCI, a method for amortizing Monte Carlo integration directly. AMCI operates
similarly to amortized inference but produces three distinct amortized
proposals, each tailored to a different component of the overall expectation
calculation. At run-time, samples are produced separately from each amortized
proposal, before being combined into an overall estimate of the expectation.
We show that while existing approaches are fundamentally limited in the level
of accuracy they can achieve, AMCI can theoretically produce arbitrarily
small errors for any integrable target function using only a single sample
from each proposal at run-time. We further show that it is able to empirically
outperform the theoretically optimal self-normalized importance sampler on a
number of example problems. Furthermore, AMCI allows not only for amortizing
over datasets but also amortizing over target functions.
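The numerator/denominator decomposition behind this scheme can be sketched on a toy conjugate-Gaussian problem. Everything below is an illustrative assumption rather than the paper's setup: the model, the target f, and the hand-picked Gaussian proposals stand in for learned amortization artifacts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption): prior x ~ N(0, 1), likelihood
# y | x ~ N(x, 1), observed y = 1. The posterior is N(0.5, 0.5), and for
# the target f(x) = x^2 the true expectation is E[f(x) | y] = 0.5 + 0.25 = 0.75.
y_obs = 1.0

def log_joint(x):
    # log p(x, y) = log N(x; 0, 1) + log N(y; x, 1)
    return -0.5 * x**2 - 0.5 * (y_obs - x)**2 - np.log(2 * np.pi)

def f(x):
    return x**2

# Two separate proposals, one per component of the ratio
#   E[f | y] = E_q1[f(x) p(x, y) / q1(x)] / E_q2[p(x, y) / q2(x)],
# where the denominator estimates the evidence p(y).
n = 10_000
x1 = rng.normal(0.9, 1.0, n)           # numerator proposal, shifted toward |f| * p
x2 = rng.normal(0.5, np.sqrt(0.5), n)  # denominator proposal, matching the posterior

log_q1 = -0.5 * (x1 - 0.9) ** 2 - np.log(np.sqrt(2 * np.pi))
log_q2 = -0.5 * ((x2 - 0.5) ** 2 / 0.5) - np.log(np.sqrt(2 * np.pi * 0.5))

numer = np.mean(f(x1) * np.exp(log_joint(x1) - log_q1))
denom = np.mean(np.exp(log_joint(x2) - log_q2))  # evidence estimate, approx p(y)
print(numer / denom)  # close to the true value 0.75
```

Because q2 here equals the exact posterior, each denominator weight is identically p(y), so that half of the estimator has zero variance; tailoring q1 toward |f| times the joint plays the analogous role for the numerator.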
@inproceedings{golinski2018amci,
title = {{Amortized Monte Carlo Integration}},
author = {Golinski, Adam and Wood, Frank and Rainforth, Tom},
booktitle = {International Conference on Machine Learning (ICML, Best Paper honorable mention)},
year = {2019}
}
Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is in turn used to calculate expectations for one or more target functions. In this paper, we address the inefficiency of this computational pipeline when the target function(s) are known upfront. To this end, we introduce a method for amortizing Monte Carlo integration. Our approach operates in a similar manner to amortized inference, but tailors the produced amortization artifacts to maximize the accuracy of the resulting expectation calculation(s). We show that while existing approaches have fundamental limitations in the level of accuracy that can be achieved for a given run-time computational budget, our framework can produce arbitrarily small errors for a wide range of target functions with O(1) computational cost at run time. Furthermore, our framework allows not only for amortizing over possible datasets, but also over possible target functions.
@inproceedings{golinski2018amcj,
title = {{Amortized Monte Carlo Integration}},
author = {Golinski, Adam and Teh, Yee Whye and Wood, Frank and Rainforth, Tom},
booktitle = {Symposium on Advances in Approximate Bayesian Inference},
year = {2018},
month = dec
}
S. Webb, A. Golinski, R. Zinkov, N. Siddharth, T. Rainforth, Y. W. Teh, F. Wood, Faithful Inversion of Generative Models for Effective Amortized Inference, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently. Generally, they require the inversion of the dependency structure in the generative model, as the modeller must learn a mapping from observations to distributions approximating the posterior. Previous approaches have inverted the dependency structure in a heuristic way that fails to capture these dependencies correctly, thereby limiting the achievable accuracy of the resulting approximations. We introduce an algorithm for faithfully, and minimally, inverting the graphical model structure of any generative model. Such inverses have two crucial properties: (a) they do not encode any independence assertions that are absent from the model; and (b) they are local maxima for the number of true independencies encoded. We prove the correctness of our approach and empirically show that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches.
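Why naive edge reversal fails can be seen on the smallest v-structure, x → y ← z: an inverse that factorizes as q(x|y) q(z|y) asserts x ⊥ z | y, which is false in the model, whereas a faithful inverse keeps an edge between x and z. The XOR model below is an illustrative assumption, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy v-structure x -> y <- z (illustrative assumption):
# x, z ~ Bernoulli(0.5) independently, and y = x XOR z.
n = 100_000
x = rng.integers(0, 2, n)
z = rng.integers(0, 2, n)
y = x ^ z

# A heuristic inverse q(x|y) q(z|y) assumes x and z are independent given y.
# Check that empirically: conditioning on z as well changes the distribution of x.
p_x_given_y0 = x[y == 0].mean()                  # approx 0.5: y alone says nothing about x
p_x_given_y0_z0 = x[(y == 0) & (z == 0)].mean()  # exactly 0.0: y and z determine x
print(p_x_given_y0, p_x_given_y0_z0)
```

Given y = 0 the model forces x = z, so any inverse without an x–z edge discards this constraint entirely; a faithful factorization such as q(z|y) q(x|y, z) can represent the posterior exactly.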
@inproceedings{webb2018minimal,
title = {Faithful Inversion of Generative Models for Effective Amortized Inference},
author = {Webb, Stefan and Golinski, Adam and Zinkov, Robert and Siddharth, N. and Rainforth, Tom and Teh, Yee Whye and Wood, Frank},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018}
}