S. Webb, T. Rainforth, Y. W. Teh, M. P. Kumar, A Statistical Approach to Assessing Neural Network Robustness, in International Conference on Learning Representations (ICLR), 2019.
We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.
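As a concrete illustration of the rare-event estimator the abstract alludes to, the sketch below implements a generic adaptive multi-level splitting scheme in Python. Everything in it is an illustrative assumption rather than the paper's exact algorithm: the standard-normal input model in gaussian_mh_step, the function and parameter names, and all constants are placeholders.

import numpy as np

def gaussian_mh_step(rng, x, level, score, sigma=0.1):
    """One Metropolis move targeting a standard-normal input model restricted to {score >= level}."""
    prop = x + sigma * rng.standard_normal(x.shape)
    log_alpha = 0.5 * (np.sum(x ** 2) - np.sum(prop ** 2))   # symmetric proposal, so only the prior ratio matters
    if score(prop) >= level and np.log(rng.uniform()) < log_alpha:
        return prop
    return x

def amls_estimate(sample_inputs, score, mcmc_step, n=1000, keep=0.1,
                  n_mcmc=20, max_levels=100, seed=0):
    """Estimate P(score(x) >= 0) for x ~ input model as a product of per-level survival fractions."""
    rng = np.random.default_rng(seed)
    xs = sample_inputs(rng, n)                    # initial population drawn from the input model
    log_prob = 0.0
    for _ in range(max_levels):
        scores = np.array([score(x) for x in xs])
        # Raise the threshold to the (1 - keep)-quantile, capped at the target level 0.
        level = min(0.0, np.quantile(scores, 1.0 - keep))
        survivors = xs[scores >= level]
        if len(survivors) == 0:
            return 0.0                            # population died out; the probability underflows
        log_prob += np.log(len(survivors) / len(xs))
        if level >= 0.0:                          # reached the violation region
            return np.exp(log_prob)
        # Resample survivors back up to n and rejuvenate them with an MCMC kernel
        # that leaves p(x | score(x) >= level) invariant.
        xs = survivors[rng.integers(len(survivors), size=n)]
        for _ in range(n_mcmc):
            xs = np.array([mcmc_step(rng, x, level, score) for x in xs])
    raise RuntimeError("did not reach the target level; increase max_levels or keep")

Here score(x) would be something like the margin by which the network's output violates the property at input x, so that {score(x) >= 0} is exactly the violation event and the returned product of per-level survival fractions is the statistical estimate of the violation probability described above.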
@inproceedings{webb2018statistical,
title = {{A Statistical Approach to Assessing Neural Network Robustness}},
author = {Webb, Stefan and Rainforth, Tom and Teh, Yee Whye and Kumar, M Pawan},
booktitle = {International Conference on Learning Representations (ICLR)},
month = may,
year = {2019}
}
2018
S. Webb, A. Golinski, R. Zinkov, N. Siddharth, T. Rainforth, Y. W. Teh, F. Wood, Faithful Inversion of Generative Models for Effective Amortized Inference, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently. Generally, they require the inversion of the dependency structure in the generative model, as the modeller must learn a mapping from observations to distributions approximating the posterior. Previous approaches have involved inverting the dependency structure in a heuristic way that fails to capture these dependencies correctly, thereby limiting the achievable accuracy of the resulting approximations. We introduce an algorithm for faithfully, and minimally, inverting the graphical model structure of any generative model. Such inverses have two crucial properties: (a) they do not encode any independence assertions that are absent from the model, and (b) they are local maxima for the number of true independencies encoded. We prove the correctness of our approach and empirically show that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches.
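The structural point in the abstract can be spelled out on the smallest possible example; the snippet below is purely illustrative (hypothetical variable names, an adjacency-list encoding chosen for brevity) and is not the paper's inversion algorithm. In the collider model z1 → x ← z2 the latents become dependent once x is observed, so an inverse obtained by naively reversing the edges asserts an independence the model does not have, whereas a faithful inverse links the latents.

# Hypothetical three-node generative model with latents z1, z2 and observation x,
# stored as parent -> children adjacency lists (names are illustrative only).
model = {"z1": ["x"], "z2": ["x"], "x": []}            # z1 -> x <- z2, a collider at x

# Heuristic inverse: reverse every edge. Its structure encodes "z1 independent of z2 given x",
# an assertion absent from the model (conditioning on a collider couples its parents),
# so an amortized posterior q(z1 | x) q(z2 | x) with this structure cannot be exact.
heuristic_inverse = {"x": ["z1", "z2"], "z1": [], "z2": []}

# Faithful inverse: keep x -> z1 and x -> z2 but also link the latents (e.g. z1 -> z2),
# so the inverse encodes no independence assertion that the model does not make.
faithful_inverse = {"x": ["z1", "z2"], "z1": ["z2"], "z2": []}

The paper's algorithm constructs such faithful inverses, while keeping the number of extra dependencies as small as possible, for arbitrary generative model structures; the snippet only shows why naive edge reversal falls short.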
@inproceedings{webb2018minimal,
title = {Faithful Inversion of Generative Models for Effective Amortized Inference},
author = {Webb, Stefan and Golinski, Adam and Zinkov, Robert and Siddharth, N. and Rainforth, Tom and Teh, Yee Whye and Wood, Frank},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018}
}
2017
L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server, Journal of Machine Learning Research (JMLR), vol. 18, no. 106, pp. 1–37, Oct. 2017.
This paper makes two contributions to Bayesian machine learning algorithms. Firstly, we propose stochastic natural gradient expectation propagation (SNEP), a novel alternative to expectation propagation (EP), a popular variational inference algorithm. SNEP is a black box variational algorithm, in that it does not require any simplifying assumptions on the distribution of interest, beyond the existence of some Monte Carlo sampler for estimating the moments of the EP tilted distributions. Further, as opposed to EP which has no guarantee of convergence, SNEP can be shown to be convergent, even when using Monte Carlo moment estimates. Secondly, we propose a novel architecture for distributed Bayesian learning which we call the posterior server. The posterior server allows scalable and robust Bayesian learning in cases where a dataset is stored in a distributed manner across a cluster, with each compute node containing a disjoint subset of data. An independent Monte Carlo sampler is run on each compute node, with direct access only to the local data subset, but which targets an approximation to the global posterior distribution given all data across the whole cluster. This is achieved by using a distributed asynchronous implementation of SNEP to pass messages across the cluster. We demonstrate SNEP and the posterior server on distributed Bayesian learning of logistic regression and neural networks.
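The message flow described above can be sketched on a toy conjugate model; the Python below is a heavily simplified, synchronous caricature, not the paper's algorithm. In particular, the paper's posterior server is asynchronous, the sites need not be Gaussian, and SNEP replaces the plain EP-style moment-matching update used here with a convergent stochastic natural-gradient update. All names and constants are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: x_i ~ N(theta_true, 1), split across K "compute nodes".
theta_true, n, K = 2.0, 400, 4
x = theta_true + rng.standard_normal(n)
shards = np.array_split(x, K)

# Gaussian messages in natural parameters (r, q), with density proportional to exp(r*theta - 0.5*q*theta**2).
prior = np.array([0.0, 0.1])                   # N(0, 10) prior on theta
sites = [np.zeros(2) for _ in range(K)]        # one site approximation per node

def tilted_moments(cavity, data, n_samples=5000, step=0.1):
    """Estimate mean/variance of cavity(theta) * prod_i N(x_i | theta, 1) by random-walk Metropolis."""
    r, q = cavity
    def log_p(t):
        return r * t - 0.5 * q * t ** 2 - 0.5 * np.sum((data - t) ** 2)
    t, lp = 0.0, log_p(0.0)
    samples = []
    for _ in range(n_samples):
        prop = t + step * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            t, lp = prop, lp_prop
        samples.append(t)
    s = np.array(samples[n_samples // 2:])     # discard burn-in
    return s.mean(), s.var()

for _ in range(10):                            # synchronous sweeps; the paper's system is asynchronous
    glob = prior + sum(sites)                  # posterior-server state: prior plus all site messages
    for k in range(K):
        cavity = glob - sites[k]               # remove node k's own contribution
        m, v = tilted_moments(cavity, shards[k])
        tilted_nat = np.array([m / v, 1.0 / v])
        sites[k] = sites[k] + 0.5 * (tilted_nat - cavity - sites[k])   # damped EP-style site update

glob = prior + sum(sites)
print("approx mean/var:", glob[0] / glob[1], 1.0 / glob[1])
print("exact  mean/var:", x.sum() / (0.1 + n), 1.0 / (0.1 + n))

On this conjugate Gaussian toy the combined approximation should land close to the exact posterior printed on the last line; this is only a sanity check of the prior-plus-sum-of-sites bookkeeping, not a reproduction of the paper's experiments.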
@article{HasWebLie2017a,
author = {Hasenclever, Leonard and Webb, Stefan and Lienart, Thibaut and Vollmer, Sebastian and Lakshminarayanan, Balaji and Blundell, Charles and Teh, Yee Whye},
title = {Distributed {B}ayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server},
journal = {Journal of Machine Learning Research (JMLR)},
year = {2017},
volume = {18},
number = {106},
pages = {1--37},
month = oct,
note = {ArXiv e-prints: 1512.09327},
bdsk-url-1 = {https://arxiv.org/pdf/1512.09327.pdf}
}
L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, Y. W. Teh, Posterior Server (software), 2016.
@software{HasWebLie2016a,
author = {Hasenclever, L. and Webb, S. and Lienart, T. and Vollmer, S. and Lakshminarayanan, B. and Blundell, C. and Teh, Y. W.},
title = {Posterior Server},
year = {2016},
bdsk-url-1 = {https://github.com/BigBayes/PosteriorServer}
}