Bayesian nonparametrics, probabilistic learning, deep learning
I am a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Scientist at Google DeepMind. I am a European Research Council Consolidator Fellow and an Alan Turing Institute Faculty Fellow. I am interested in developing foundational methodologies for statistical machine learning.
Publications
2020
A. Foster, M. Jankowiak, M. O’Meara, Y. W. Teh, T. Rainforth, A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments, International Conference on Artificial Intelligence and Statistics (AISTATS, to appear), 2020.
We introduce a fully stochastic gradient based approach to Bayesian optimal experimental design (BOED). This is achieved through the use of variational lower bounds on the expected information gain (EIG) of an experiment that can be simultaneously optimized with respect to both the variational and design parameters. This allows the design process to be carried out through a single unified stochastic gradient ascent procedure, in contrast to existing approaches that typically construct an EIG estimator on a pointwise basis, before passing this estimator to a separate optimizer. We show that this, in turn, leads to more efficient BOED schemes and provide a number of different variational objectives suited to different settings. Furthermore, we show that our gradient-based approaches are able to provide effective design optimization in substantially higher dimensional settings than existing approaches.
@article{foster2020unified,
title = {{A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments}},
author = {Foster, Adam and Jankowiak, Martin and O'Meara, Matthew and Teh, Yee Whye and Rainforth, Tom},
journal = {International Conference on Artificial Intelligence and Statistics (AISTATS, to appear)},
year = {2020}
}
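As a concrete point of reference for the abstract above, here is a minimal nested Monte Carlo estimator of the EIG on a toy linear-Gaussian design problem. The model, function names, and sample sizes are illustrative, and this pointwise baseline is precisely the kind of estimator the unified gradient approach is contrasted with, not the paper's method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_nmc(d, n_outer=2000, n_inner=2000, sigma=1.0):
    # Toy design problem: theta ~ N(0, 1), y | theta, d ~ N(d * theta, sigma^2).
    # EIG(d) = E[ log p(y | theta, d) - log p(y | d) ]; the Gaussian
    # log-normalizer cancels between the two terms, so both omit it.
    theta = rng.standard_normal(n_outer)
    y = d * theta + sigma * rng.standard_normal(n_outer)
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2
    theta_inner = rng.standard_normal((n_inner, 1))
    inner = np.exp(-0.5 * ((y[None, :] - d * theta_inner) / sigma) ** 2)
    log_marg = np.log(inner.mean(axis=0))       # inner MC estimate of log p(y | d)
    return float(np.mean(log_lik - log_marg))

# Closed form for this model: EIG(d) = 0.5 * log(1 + d^2 / sigma^2).
estimates = {d: eig_nmc(d) for d in (0.5, 1.0, 2.0)}
```

For this model the EIG is available in closed form, so the estimates can be checked directly; they should grow with the design magnitude |d|.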
2019
E. Dupont, A. Doucet, Y. W. Teh, Augmented Neural ODEs, in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, 3134–3144.
@incollection{NIPS2019_8577,
title = {Augmented Neural ODEs},
author = {Dupont, Emilien and Doucet, Arnaud and Teh, Yee Whye},
booktitle = {Advances in Neural Information Processing Systems 32},
editor = {Wallach, H. and Larochelle, H. and Beygelzimer, A. and d\textquotesingle Alch\'{e}-Buc, F. and Fox, E. and Garnett, R.},
pages = {3134--3144},
year = {2019},
month = dec,
publisher = {Curran Associates, Inc.}
}
E. Nalisnick, A. Matsukawa, Y. W. Teh, D. Gorur, B. Lakshminarayanan, Hybrid Models with Deep and Invertible Features, in International Conference on Machine Learning (ICML), 2019.
We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets|features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models. Yet the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and enabling semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.
@inproceedings{pmlr-v97-nalisnick19b,
title = {Hybrid Models with Deep and Invertible Features},
author = {Nalisnick, Eric and Matsukawa, Akihiro and Teh, Yee Whye and Gorur, Dilan and Lakshminarayanan, Balaji},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2019},
series = {Proceedings of Machine Learning Research},
month = jun,
publisher = {PMLR}
}
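The single-pass property described in the abstract can be illustrated with a hand-rolled one-dimensional affine "flow", a deliberately minimal stand-in for the paper's deep invertible features; all parameter values here are illustrative.

```python
import numpy as np

a, b = 2.0, 0.5            # invertible affine feature map z = a*x + b (a != 0)
w, c = 1.5, -0.2           # linear predictive head on the features

def forward(x):
    z = a * x + b
    # Change of variables under a standard-normal base density:
    # log p(x) = log N(z; 0, 1) + log |dz/dx|
    log_px = -0.5 * (z ** 2 + np.log(2 * np.pi)) + np.log(abs(a))
    y_mean = w * z + c     # mean of p(targets | features), from the same pass
    return z, log_px, y_mean

z, log_px, y_mean = forward(np.array([0.0, 1.0]))
```

One forward pass yields both the exact feature density and the predictive model, and the map is exactly invertible via x = (z − b) / a.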
J. Lee, Y. Lee, J. Kim, A. Kosiorek, S. Choi, Y. W. Teh, Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks, in International Conference on Machine Learning (ICML), 2019.
Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the order of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces the computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating state-of-the-art performance compared to recent methods for set-structured data.
@inproceedings{pmlr-v97-lee19d,
title = {Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks},
author = {Lee, Juho and Lee, Yoonho and Kim, Jungtaek and Kosiorek, Adam and Choi, Seungjin and Teh, Yee Whye},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2019},
series = {Proceedings of Machine Learning Research},
month = jun,
publisher = {PMLR}
}
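A minimal sketch of the inducing-point attention idea, assuming plain scaled dot-product attention and random inducing vectors (the real Set Transformer uses learned inducing points inside multi-head attention blocks):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(s, axis=-1):
    s = s - s.max(axis=axis, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Plain scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def induced_attention(x, inducing):
    # Attend the m inducing vectors to the n set elements, then attend the
    # set back to the m summaries: no n-by-n score matrix is ever formed,
    # so the cost is O(nm) rather than O(n^2).
    h = attend(inducing, x, x)   # (m, d): inducing points summarize the set
    return attend(x, h, h)       # (n, d): elements read from the summaries

n, m, d = 100, 8, 4
x = rng.standard_normal((n, d))
inducing = rng.standard_normal((m, d))
out = induced_attention(x, inducing)
```

Only n-by-m score matrices appear, and the output remains permutation equivariant in the set elements.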
L. T. Elliott, M. De Iorio, S. Favaro, K. Adhikari, Y. W. Teh, Modeling Population Structure Under Hierarchical Dirichlet Processes, Bayesian Analysis, Jun. 2019.
We propose a Bayesian nonparametric model to infer population admixture, extending the hierarchical Dirichlet process to allow for correlation between loci due to linkage disequilibrium. Given multilocus genotype data from a sample of individuals, the proposed model allows inferring and classifying individuals as unadmixed or admixed, inferring the number of subpopulations ancestral to an admixed population and the population of origin of chromosomal regions. Our model does not assume any specific mutation process, and can be applied to most of the commonly used genetic markers. We present a Markov chain Monte Carlo (MCMC) algorithm to perform posterior inference from the model and we discuss some methods to summarize the MCMC output for the analysis of population admixture. Finally, we demonstrate the performance of the proposed model in a real application, using genetic data from the ectodysplasin-A receptor (EDAR) gene, which is considered to be ancestry-informative due to well-known variations in allele frequency as well as phenotypic effects across ancestry. The structure analysis of this dataset leads to the identification of a rare haplotype in Europeans. We also conduct a simulated experiment and show that our algorithm outperforms parametric methods.
@article{elliott2019,
author = {Elliott, Lloyd T. and De Iorio, Maria and Favaro, Stefano and Adhikari, Kaustubh and Teh, Yee Whye},
doi = {10.1214/17-BA1093},
firstavailable = {2018-05-19T02:03:09Z},
fjournal = {Bayesian Analysis},
journal = {Bayesian Analysis},
publisher = {International Society for Bayesian Analysis},
title = {Modeling Population Structure Under Hierarchical Dirichlet Processes},
month = jun,
year = {2019}
}
S. Webb, T. Rainforth, Y. W. Teh, M. P. Kumar, A Statistical Approach to Assessing Neural Network Robustness, in International Conference on Learning Representations (ICLR), 2019.
We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.
@inproceedings{webb2018statistical,
title = {{A Statistical Approach to Assessing Neural Network Robustness}},
author = {Webb, Stefan and Rainforth, Tom and Teh, Yee Whye and Kumar, M Pawan},
booktitle = {International Conference on Learning Representations (ICLR)},
month = may,
year = {2019}
}
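The quantity being estimated can be seen with a crude Monte Carlo sketch on a toy "network" and property (both illustrative stand-ins; the paper's contribution is an adaptive multi-level splitting estimator for the rare-event regime where naive sampling like this breaks down):

```python
import numpy as np

rng = np.random.default_rng(2)

def net(x):
    # Toy stand-in for a trained network's scalar output.
    return np.sin(3 * x) + 0.1 * x

def violated(x, threshold=0.9):
    # Property under test: the output should stay below the threshold.
    return net(x) > threshold

# Input model: standard normal over the input space.
x = rng.standard_normal(100_000)
p_hat = float(violated(x).mean())   # estimated violation probability
```

Unlike a verifier's binary verdict, p_hat grades how robust the toy model is; when violations are vanishingly rare this naive estimator returns 0, which is the gap multi-level splitting addresses.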
H. Kim, A. Mnih, J. Schwarz, M. Garnelo, S. M. A. Eslami, D. Rosenbaum, O. Vinyals, Y. W. Teh, Attentive Neural Processes, in International Conference on Learning Representations (ICLR), 2019.
Neural Processes (NPs) (Garnelo et al., 2018) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
@inproceedings{KimTeh2019a,
author = {Kim, H. and Mnih, A. and Schwarz, J. and Garnelo, M. and Eslami, S. M. A. and Rosenbaum, D. and Vinyals, O. and Teh, Y. W.},
title = {Attentive Neural Processes},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
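The underfitting fix can be caricatured in a few lines: let each target input attend to individual context points rather than to a single pooled summary. This is a toy kernel-style attention sketch, not the full ANP architecture, and the bandwidth and data are illustrative.

```python
import numpy as np

def softmax(s, axis=-1):
    s = s - s.max(axis=axis, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=axis, keepdims=True)

xc = np.linspace(-2, 2, 10)                  # context inputs
yc = np.sin(xc)                              # context outputs
xt = np.array([-2.0, 0.0, 2.0])              # target inputs (ends are observed points)

# Laplace/RBF-style attention weights from each target to every context point.
scores = -((xt[:, None] - xc[None, :]) ** 2) / 0.01
pred_attn = softmax(scores) @ yc             # per-target attention over context
pred_mean = np.full_like(xt, yc.mean())      # mean-pooling baseline: one summary for all
```

At an observed input the attention prediction locks onto the matching context observation, while the pooled baseline returns the same value everywhere, which is exactly the underfitting the abstract describes.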
J. Merel, L. Hasenclever, A. Galashov, A. Ahuja, V. Pham, G. Wayne, Y. W. Teh, N. Heess, Neural Probabilistic Motor Primitives for Humanoid Control, in International Conference on Learning Representations (ICLR), 2019.
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA ) summarizing our results.
@inproceedings{MerHasGal2019a,
author = {Merel, Josh and Hasenclever, Leonard and Galashov, Alexandre and Ahuja, Arun and Pham, Vu and Wayne, Greg and Teh, Yee Whye and Heess, Nicolas},
title = {Neural Probabilistic Motor Primitives for Humanoid Control},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
E. Nalisnick, A. Matsukawa, Y. W. Teh, D. Gorur, B. Lakshminarayanan, Do Deep Generative Models Know What They Don’t Know?, in International Conference on Learning Representations (ICLR), 2019.
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are generally viewed to be robust to such overconfidence mistakes as modeling the density of the input features can be used to detect novel, out-of-distribution inputs.
In this paper we challenge this assumption, focusing our analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find that the model density cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. We find such behavior persists even when we restrict the flow models to constant-volume transformations. These admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature, which shows that such behavior is more general and not just restricted to the pairs of datasets used in our experiments. Our results suggest caution when using density estimates of deep generative models on out-of-distribution inputs.
@inproceedings{NalMatTeh2019a,
author = {Nalisnick, Eric and Matsukawa, Akihiro and Teh, Yee Whye and Gorur, Dilan and Lakshminarayanan, Balaji},
title = {Do Deep Generative Models Know What They Don't Know?},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
A. Galashov, S. M. Jayakumar, L. Hasenclever, D. Tirumala, J. Schwarz, G. Desjardins, W. M. Czarnecki, Y. W. Teh, R. Pascanu, N. Heess, Information asymmetry in KL-regularized RL, in International Conference on Learning Representations (ICLR), 2019.
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.
Please watch the video demonstrating learned experts and default policies on several continuous control tasks ( https://youtu.be/U2qA3llzus8 ).
@inproceedings{GalJayHas2019a,
author = {Galashov, Alexandre and Jayakumar, Siddhant M. and Hasenclever, Leonard and Tirumala, Dhruva and Schwarz, Jonathan and Desjardins, Guillaume and Czarnecki, Wojciech M. and Teh, Yee Whye and Pascanu, Razvan and Heess, Nicolas},
title = {Information asymmetry in KL-regularized RL},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
B. Bloem-Reddy, Y. W. Teh, Probabilistic symmetry and invariant neural networks, ArXiv e-prints:1901.06082, 2019.
@article{Bloem-Reddy:Teh:2019,
archiveprefix = {arXiv},
author = {Bloem-Reddy, Benjamin and Teh, Yee Whye},
eprint = {1901.06082},
month = jan,
primaryclass = {stat.ML},
title = {Probabilistic symmetry and invariant neural networks},
year = {2019}
}
B. Gram-Hansen, C. S. de Witt, T. Rainforth, P. H. Torr, Y. W. Teh, A. G. Baydin, Hijacking Malaria Simulators with Probabilistic Programming, in International Conference on Machine Learning (ICML) AI for Social Good workshop (AI4SG), 2019.
@inproceedings{gram2019hijacking,
title = {Hijacking Malaria Simulators with Probabilistic Programming},
author = {Gram-Hansen, Bradley and de Witt, Christian Schr{\"o}der and Rainforth, Tom and Torr, Philip HS and Teh, Yee Whye and Baydin, At{\i}l{\i}m G{\"u}ne{\c{s}}},
booktitle = {International Conference on Machine Learning (ICML) AI for Social Good workshop (AI4SG)},
year = {2019}
}
E. Mathieu, T. Rainforth, N. Siddharth, Y. W. Teh, Disentangling Disentanglement in Variational Autoencoders, in Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, USA, 2019, vol. 97, 4402–4412.
We develop a generalisation of disentanglement in VAEs—decomposition of the latent representation—characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior. Decomposition permits disentanglement, i.e. explicit independence between latents, as a special case, but also allows for a much richer class of properties to be imposed on the learnt representation, such as sparsity, clustering, independent subspaces, or even intricate hierarchical dependency relationships. We show that the β-VAE varies from the standard VAE predominantly in its control of latent overlap and that for the standard choice of an isotropic Gaussian prior, its objective is invariant to rotations of the latent representation. Viewed from the decomposition perspective, breaking this invariance with simple manipulations of the prior can yield better disentanglement with little or no detriment to reconstructions. We further demonstrate how other choices of prior can assist in producing different decompositions and introduce an alternative training objective that allows the control of both decomposition factors in a principled manner.
@inproceedings{pmlr-v97-mathieu19a,
title = {Disentangling Disentanglement in Variational Autoencoders},
author = {Mathieu, Emile and Rainforth, Tom and Siddharth, N and Teh, Yee Whye},
booktitle = {Proceedings of the 36th International Conference on Machine Learning},
pages = {4402--4412},
year = {2019},
volume = {97},
series = {Proceedings of Machine Learning Research},
address = {Long Beach, California, USA},
month = {09--15 Jun},
publisher = {PMLR}
}
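For reference, the β-VAE objective that the abstract analyses, written with the closed-form KL between a diagonal-Gaussian encoder and the isotropic Gaussian prior (the reconstruction value and encoder statistics below are illustrative numbers, not fitted quantities):

```python
import numpy as np

def beta_vae_objective(recon_log_lik, mu, log_var, beta):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon_log_lik - beta * kl

mu = np.array([0.3, -0.1])                # illustrative encoder means
log_var = np.array([-0.5, 0.2])           # illustrative encoder log-variances
obj1 = beta_vae_objective(-10.0, mu, log_var, beta=1.0)   # standard VAE objective
obj4 = beta_vae_objective(-10.0, mu, log_var, beta=4.0)   # stronger pull toward the prior
```

Raising β penalizes divergence from the prior more heavily, which in the paper's decomposition view is a control on latent overlap rather than on disentanglement per se.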
A. Foster, M. Jankowiak, E. Bingham, P. Horsfall, Y. W. Teh, T. Rainforth, N. Goodman, Variational Bayesian Optimal Experimental Design, Advances in Neural Information Processing Systems (NeurIPS, spotlight), 2019.
Bayesian optimal experimental design (BOED) is a principled framework for making efficient use of limited experimental resources. Unfortunately, its applicability is hampered by the difficulty of obtaining accurate estimates of the expected information gain (EIG) of an experiment. To address this, we introduce several classes of fast EIG estimators by building on ideas from amortized variational inference. We show theoretically and empirically that these estimators can provide significant gains in speed and accuracy over previous approaches. We further demonstrate the practicality of our approach on a number of end-to-end experiments.
@article{foster2019variational,
title = {{Variational Bayesian Optimal Experimental Design}},
author = {Foster, Adam and Jankowiak, Martin and Bingham, Eli and Horsfall, Paul and Teh, Yee Whye and Rainforth, Tom and Goodman, Noah},
journal = {Advances in Neural Information Processing Systems (NeurIPS, spotlight)},
year = {2019}
}
J. Ton, L. Chan, Y. W. Teh, D. Sejdinovic, Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings, ArXiv e-prints:1906.02236, 2019.
Current meta-learning approaches focus on learning functional representations of relationships between variables, i.e. on estimating conditional expectations in regression. In many applications, however, we are faced with conditional distributions which cannot be meaningfully summarized using expectation only (due to e.g. multimodality). Hence, we consider the problem of conditional density estimation in the meta-learning setting. We introduce a novel technique for meta-learning which combines neural representation and noise-contrastive estimation with the established literature of conditional mean embeddings into reproducing kernel Hilbert spaces. The method is validated on synthetic and real-world problems, demonstrating the utility of sharing learned representations across multiple conditional density estimation tasks.
@unpublished{TonChaTehSej2019,
author = {Ton, Jean-Francois and Chan, Lucian and Teh, Yee Whye and Sejdinovic, Dino},
title = {{{Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings}}},
journal = {ArXiv e-prints:1906.02236},
year = {2019}
}
Y. Zhou, H. Yang, Y. W. Teh, T. Rainforth, Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support, arXiv preprint arXiv:1910.13324, 2019.
@article{zhou2019divide,
title = {Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support},
author = {Zhou, Yuan and Yang, Hongseok and Teh, Yee Whye and Rainforth, Tom},
journal = {arXiv preprint arXiv:1910.13324},
year = {2019}
}
2018
X. Miscouridou, F. Caron, Y. W. Teh, Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
@inproceedings{HawkesInteractions,
author = {Miscouridou, Xenia and Caron, Fran\c ois and Teh, Yee Whye},
title = {Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
note = {ArXiv e-prints: 1803.06070},
year = {2018},
month = dec
}
A. Golinski, Y. W. Teh, F. Wood, T. Rainforth, Amortized Monte Carlo Integration, in Symposium on Advances in Approximate Bayesian Inference, 2018.
Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is in turn used to calculate expectations for one or more target functions. In this paper, we address the inefficiency of this computational pipeline when the target function(s) are known upfront. To this end, we introduce a method for amortizing Monte Carlo integration. Our approach operates in a similar manner to amortized inference, but tailors the produced amortization artifacts to maximize the accuracy of the resulting expectation calculation(s). We show that while existing approaches have fundamental limitations in the level of accuracy that can be achieved for a given run time computational budget, our framework can produce arbitrarily small errors for a wide range of target functions with O(1) computational cost at run time. Furthermore, our framework allows not only for amortizing over possible datasets, but also over possible target functions.
@inproceedings{golinski2018amcj,
title = {{Amortized Monte Carlo Integration}},
author = {Golinski, Adam and Teh, Yee Whye and Wood, Frank and Rainforth, Tom},
booktitle = {Symposium on Advances in Approximate Bayesian Inference},
year = {2018},
month = dec
}
J. Mitrovic, D. Sejdinovic, Y. Teh, Causal Inference via Kernel Deviance Measures, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
Discovering the causal structure among a set of variables is a fundamental problem in many areas of science. In this paper, we propose Kernel Conditional Deviance for Causal Inference (KCDC), a fully nonparametric causal discovery method based on purely observational data. From a novel interpretation of the notion of asymmetry between cause and effect, we derive a corresponding asymmetry measure using the framework of reproducing kernel Hilbert spaces. Based on this, we propose three decision rules for causal discovery. We demonstrate the wide applicability of our method across a range of diverse synthetic datasets. Furthermore, we test our method on real-world time series data and the real-world benchmark dataset Tübingen Cause-Effect Pairs, where we outperform existing state-of-the-art methods.
@inproceedings{MitSejTeh2018,
author = {Mitrovic, J. and Sejdinovic, D. and Teh, Y.W.},
title = {{{Causal Inference via Kernel Deviance Measures}}},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018},
month = dec
}
J. Chen, J. Zhu, Y. W. Teh, T. Zhang, Stochastic Expectation Maximization with Variance Reduction, in Advances in Neural Information Processing Systems (NeurIPS), 2018, 7978–7988.
@inproceedings{NIPS2018_8021,
author = {Chen, Jianfei and Zhu, Jun and Teh, Yee Whye and Zhang, Tong},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
editor = {Bengio, S. and Wallach, H. and Larochelle, H. and Grauman, K. and Cesa-Bianchi, N. and Garnett, R.},
pages = {7978--7988},
publisher = {Curran Associates, Inc.},
title = {Stochastic Expectation Maximization with Variance Reduction},
year = {2018},
month = dec,
bdsk-url-1 = {http://papers.nips.cc/paper/8021-stochastic-expectation-maximization-with-variance-reduction.pdf}
}
B. Bloem-Reddy, A. Foster, E. Mathieu, Y. W. Teh, Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks, in Conference on Uncertainty in Artificial Intelligence, 2018.
@inproceedings{BloemReddy:etal:2018,
author = {Bloem-Reddy, Benjamin and Foster, Adam and Mathieu, Emile and Teh, Yee Whye},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
title = {Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks},
month = aug,
year = {2018}
}
T. Rainforth, A. R. Kosiorek, T. A. Le, C. J. Maddison, M. Igl, F. Wood, Y. W. Teh, Tighter Variational Bounds are Not Necessarily Better, in International Conference on Machine Learning (ICML), 2018.
We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator. Our results call into question common implicit assumptions that tighter ELBOs are better variational objectives for simultaneous model learning and inference amortization schemes. Based on our insights, we introduce three new algorithms: the partially importance weighted auto-encoder (PIWAE), the multiply importance weighted auto-encoder (MIWAE), and the combination importance weighted auto-encoder (CIWAE), each of which includes the standard importance weighted auto-encoder (IWAE) as a special case. We show that each can deliver improvements over IWAE, even when performance is measured by the IWAE target itself. Moreover, PIWAE can simultaneously deliver improvements in both the quality of the inference network and generative network, relative to IWAE.
@inproceedings{rainforth2018tighter,
title = {Tighter Variational Bounds are Not Necessarily Better},
author = {Rainforth, Tom and Kosiorek, Adam R. and Le, Tuan Anh and Maddison, Chris J. and Igl, Maximilian and Wood, Frank and Teh, Yee Whye},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2018},
month = jul
}
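The importance weighted bound at issue can be sketched on a toy conjugate-Gaussian model where log p(x) is known exactly, with a deliberately mismatched fixed proposal standing in for a learned inference network (all names and settings here are illustrative, and this shows only the bound-tightening behaviour, not the paper's signal-to-noise analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

x = 1.5                                   # single observation
# Model: z ~ N(0, 1), x | z ~ N(z, 1), so p(x) = N(x; 0, 2) in closed form.
log_px = -0.5 * (x ** 2 / 2 + np.log(2 * np.pi * 2))

def iwae_bound(K, n_rep=4000):
    # L_K = E[ log (1/K) sum_k p(x, z_k) / q(z_k | x) ], with z_k ~ q(z | x).
    z = 2.0 * rng.standard_normal((n_rep, K))        # q(z|x) = N(0, 4), mismatched on purpose
    log_q = -0.5 * ((z / 2.0) ** 2 + np.log(2 * np.pi * 4.0))
    log_joint = (-0.5 * (z ** 2 + np.log(2 * np.pi))
                 - 0.5 * ((x - z) ** 2 + np.log(2 * np.pi)))
    log_w = log_joint - log_q
    m = log_w.max(axis=1, keepdims=True)             # stable log-mean-exp over the K samples
    return float((m[:, 0] + np.log(np.exp(log_w - m).mean(axis=1))).mean())

b1, b64 = iwae_bound(1), iwae_bound(64)              # b1 is the standard ELBO
```

K = 1 recovers the ELBO and larger K tightens the bound toward log p(x); the paper's point is that this tightening can nonetheless hurt the gradient signal used to train the inference network.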
M. Battiston, S. Favaro, D. M. Roy, Y. W. Teh, A Characterization of Product-Form Exchangeable Feature Probability Functions, Annals of Applied Probability, vol. 28, Jun. 2018.
We characterize the class of exchangeable feature allocations assigning probability V_{n,k} ∏_{l=1}^{k} W_{m_l} U_{n−m_l} to a feature allocation of n individuals, displaying k features with counts (m_1, …, m_k) for these features. Each element of this class is parametrized by a countable matrix V and two sequences U and W of non-negative weights. Moreover, a consistency condition is imposed to guarantee that the distribution for feature allocations of n−1 individuals is recovered from that of n individuals, when the last individual is integrated out. In Theorem 1.1, we prove that the only members of this class satisfying the consistency condition are mixtures of the Indian Buffet Process over its mass parameter γ and mixtures of the Beta–Bernoulli model over its dimensionality parameter N. Hence, we provide a characterization of these two models as the only consistent exchangeable feature allocations having the required product form, up to randomization of the parameters.
@article{BatFavRoy2016a,
author = {Battiston, M. and Favaro, S. and Roy, D. M. and Teh, Y. W.},
journal = {Annals of Applied Probability},
title = {A Characterization of Product-Form Exchangeable Feature Probability Functions},
volume = {28},
year = {2018},
month = jun,
bdsk-url-1 = {https://arxiv.org/pdf/1607.02066.pdf}
}
H. Kim, Y. W. Teh, Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes, in Artificial Intelligence and Statistics (AISTATS), 2018.
Automating statistical modelling is a challenging problem in artificial intelligence. The Automatic Statistician takes a first step in this direction, by employing a kernel search algorithm with Gaussian Processes (GP) to provide interpretable statistical models for regression problems. However, this does not scale due to its O(N^3) running time for model selection. We propose Scalable Kernel Composition (SKC), a scalable kernel search algorithm that extends the Automatic Statistician to bigger data sets. In doing so, we derive a cheap upper bound on the GP marginal likelihood that sandwiches the marginal likelihood with the variational lower bound. We show that the upper bound is significantly tighter than the lower bound and thus useful for model selection.
@inproceedings{KimTeh18,
author = {Kim, H. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes},
month = apr,
year = {2018}
}
M. Battiston, S. Favaro, Y. W. Teh, Bayesian nonparametric approaches to sample-size estimation for finding unseen species, 2018.
@unpublished{Battiston:Favaro:Teh:2018b,
author = {Battiston, M. and Favaro, S. and Teh, Y. W.},
title = {Bayesian nonparametric approaches to sample-size estimation for finding unseen species},
year = {2018}
}
B. Bloem-Reddy, Y. W. Teh, Neural network models of exchangeable sequences, NeurIPS Workshop on Bayesian Deep Learning, 2018.
@article{BloemReddy:Teh:2018aa,
author = {Bloem-Reddy, Benjamin and Teh, Yee Whye},
journal = {NeurIPS Workshop on Bayesian Deep Learning},
title = {Neural network models of exchangeable sequences},
year = {2018}
}
A. Foster, M. Jankowiak, E. Bingham, Y. W. Teh, T. Rainforth, N. Goodman, Variational Optimal Experiment Design: Efficient Automation of Adaptive Experiments, NeurIPS Workshop on Bayesian Deep Learning, 2018.
Bayesian optimal experimental design (OED) is a principled framework for making efficient use of limited experimental resources. Unfortunately, the applicability of OED is hampered by the difficulty of obtaining accurate estimates of the expected information gain (EIG) for different experimental designs. We introduce a class of fast EIG estimators that leverage amortised variational inference and show that they provide substantial empirical gains over previous approaches. We integrate our approach into a deep probabilistic programming framework, thus making OED accessible to practitioners at large.
@article{foster2018voed,
title = {{Variational Optimal Experiment Design: Efficient Automation of Adaptive Experiments}},
author = {Foster, Adam and Jankowiak, Martin and Bingham, Eli and Teh, Yee Whye and Rainforth, Tom and Goodman, Noah},
journal = {NeurIPS Workshop on Bayesian Deep Learning},
year = {2018}
}
S. Webb
,
A. Golinski
,
R. Zinkov
,
N. Siddharth
,
T. Rainforth
,
Y. W. Teh
,
F. Wood
,
Faithful Inversion of Generative Models for Effective Amortized Inference, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently. Generally, they require the inversion of the dependency structure in the generative model, as the modeller must learn a mapping from observations to distributions approximating the posterior. Previous approaches have involved inverting the dependency structure in a heuristic way that fails to capture these dependencies correctly, thereby limiting the achievable accuracy of the resulting approximations. We introduce an algorithm for faithfully, and minimally, inverting the graphical model structure of any generative model. Such inverses have two crucial properties: (a) they do not encode any independence assertions that are absent from the model; and (b) they are local maxima for the number of true independencies encoded. We prove the correctness of our approach and empirically show that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches.
@inproceedings{webb2018minimal,
title = {Faithful Inversion of Generative Models for Effective Amortized Inference},
author = {Webb, Stefan and Golinski, Adam and Zinkov, Robert and Siddharth, N. and Rainforth, Tom and Teh, Yee Whye and Wood, Frank},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018}
}
A. R. Kosiorek
,
H. Kim
,
Y. W. Teh
,
I. Posner
,
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioning on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision.
@inproceedings{koskimposteh18,
title = {Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects},
author = {Kosiorek, Adam R. and Kim, Hyunjik and Teh, Yee Whye and Posner, Ingmar},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018}
}
T. A. Le
,
A. R. Kosiorek
,
N. Siddharth
,
Y. W. Teh
,
F. Wood
,
Revisiting Reweighted Wake-Sleep, CoRR, vol. abs/1805.10469, 2018.
@article{Le2018RevisitingRW,
title = {Revisiting Reweighted Wake-Sleep},
author = {Le, Tuan Anh and Kosiorek, Adam R. and Siddharth, N. and Teh, Yee Whye and Wood, Frank},
journal = {CoRR},
year = {2018},
volume = {abs/1805.10469}
}
T. Rainforth
,
Y. Zhou
,
X. Lu
,
Y. W. Teh
,
F. Wood
,
H. Yang
,
J. van de Meent
,
Inference Trees: Adaptive Inference with Exploration, arXiv preprint arXiv:1806.09550, 2018.
We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods. ITs adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner. This enables ITs to not only identify regions of high posterior mass, but also maintain uncertainty estimates to track regions where significant posterior mass may have been missed. ITs can be based on any inference method that provides a consistent estimate of the marginal likelihood. They are particularly effective when combined with sequential Monte Carlo, where they capture long-range dependencies and yield improvements beyond proposal adaptation alone.
@article{rainforth2018it,
title = {Inference Trees: Adaptive Inference with Exploration},
author = {Rainforth, Tom and Zhou, Yuan and Lu, Xiaoyu and Teh, Yee Whye and Wood, Frank and Yang, Hongseok and van de Meent, Jan-Willem},
journal = {arXiv preprint arXiv:1806.09550},
year = {2018}
}
X. Lu
,
T. Rainforth
,
Y. Zhou
,
J. van de Meent
,
Y. W. Teh
,
On Exploration, Exploitation and Learning in Adaptive Importance Sampling, arXiv preprint arXiv:1810.13296, 2018.
We study adaptive importance sampling (AIS) as an online learning problem and argue for the importance of the trade-off between exploration and exploitation in this adaptation. Borrowing ideas from the bandits literature, we propose Daisee, a partition-based AIS algorithm. We further introduce a notion of regret for AIS and show that Daisee has O((log T)^(3/4) √T) cumulative pseudo-regret, where T is the number of iterations. We then extend Daisee to adaptively learn a hierarchical partitioning of the sample space for more efficient sampling and confirm the performance of both algorithms empirically.
@article{lu2018exploration,
title = {{On Exploration, Exploitation and Learning in Adaptive Importance Sampling}},
author = {Lu, Xiaoyu and Rainforth, Tom and Zhou, Yuan and van de Meent, Jan-Willem and Teh, Yee Whye},
journal = {arXiv preprint arXiv:1810.13296},
year = {2018}
}
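The exploration/exploitation trade-off in partition-based sampling can be illustrated with a toy: allocate draws across a fixed partition using a bandit-style bonus, then form a stratified estimate. This is only a loose analogue of the idea, not the Daisee algorithm; the scoring rule and all names below are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                       # integrand; the integral over [0,1] is 1/3
edges = np.linspace(0.0, 1.0, 5)           # fixed partition of [0,1] into 4 bins
counts, sums, sqs = np.zeros(4), np.zeros(4), np.zeros(4)

def draw(k):
    """Sample uniformly inside bin k and update its sufficient statistics."""
    y = f(rng.uniform(edges[k], edges[k + 1]))
    counts[k] += 1; sums[k] += y; sqs[k] += y * y

for k in range(4):                         # visit every bin once before adapting
    draw(k)
for t in range(4000):
    means = sums / counts
    stds = np.sqrt(np.maximum(sqs / counts - means ** 2, 0.0))
    # exploit bins where f varies most, but add a bonus that forces occasional
    # visits to rarely sampled bins, so no region is written off too early
    draw(int(np.argmax(stds + np.sqrt(np.log(t + 2.0) / counts))))

# stratified estimate: sum over bins of bin volume times the within-bin mean
estimate = float(np.sum(0.25 * sums / counts))
assert abs(estimate - 1.0 / 3.0) < 0.01
```

The exploration bonus shrinks as a bin is visited, so low-variance bins still get sampled often enough for their within-bin means to be reliable.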
T. G. J. Rudner
,
V. Fortuin
,
Y. W. Teh
,
Y. Gal
,
On the Connection between Neural Processes and Approximate Gaussian Processes, NeurIPS 2018 Workshop on Bayesian Deep Learning, 2018.
@article{Rudner:etal:2018,
author = {Rudner, Tim G. J. and Fortuin, Vincent and Teh, Yee Whye and Gal, Yarin},
title = {{O}n the {C}onnection between {N}eural {P}rocesses and {A}pproximate {G}aussian {P}rocesses},
journal = {NeurIPS 2018 Workshop on Bayesian Deep Learning},
year = {2018}
}
2017
C. J. Maddison
,
D. Lawson
,
G. Tucker
,
N. Heess
,
M. Norouzi
,
A. Mnih
,
A. Doucet
,
Y. W. Teh
,
Filtering Variational Objectives, in Advances in Neural Information Processing Systems (NeurIPS), 2017.
The evidence lower bound (ELBO) appears in many algorithms for maximum likelihood estimation (MLE) with latent variables because it is a sharp lower bound of the marginal log-likelihood. For neural latent variable models, optimizing the ELBO jointly in the variational posterior and model parameters produces state-of-the-art results. Inspired by the success of the ELBO as a surrogate MLE objective, we consider the extension of the ELBO to a family of lower bounds defined by a Monte Carlo estimator of the marginal likelihood. We show that the tightness of such bounds is asymptotically related to the variance of the underlying estimator. We introduce a special case, the filtering variational objectives (FIVOs), which take the same arguments as the ELBO and pass them through a particle filter to form a tighter bound. FIVOs can be optimized tractably with stochastic gradients, and are particularly suited to MLE in sequential latent variable models. In standard sequential generative modeling tasks we present uniform improvements over models trained with ELBO, including some whole nat-per-timestep improvements.
@inproceedings{MadLawTuc2017b,
author = {Maddison, C. J. and Lawson, D. and Tucker, G. and Heess, N. and Norouzi, M. and Mnih, A. and Doucet, A. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Filtering Variational Objectives},
year = {2017},
month = dec,
bdsk-url-1 = {https://arxiv.org/pdf/1705.09279v1.pdf}
}
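The family of bounds described above can be checked numerically on a toy conjugate model: any unbiased estimator Ẑ of the marginal likelihood gives E[log Ẑ] ≤ log Z by Jensen's inequality, and a lower-variance estimator gives a tighter bound. A minimal sketch (the toy model and all names are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.5  # a single observation

def log_norm(v, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (v - mu) ** 2 / var

# toy model: z ~ N(0,1), x|z ~ N(z,1), so the true marginal is x ~ N(0,2)
log_Z = log_norm(x, 0.0, 2.0)

def bound(K, reps=20000):
    """E[log Zhat_K] for the K-sample importance estimator with the prior
    as proposal; an unbiased Zhat lower-bounds log_Z by Jensen's inequality."""
    z = rng.standard_normal((reps, K))
    logw = log_norm(x, z, 1.0)                       # log p(x | z)
    log_Zhat = np.logaddexp.reduce(logw, axis=1) - np.log(K)
    return float(log_Zhat.mean())

elbo, iwae8 = bound(1), bound(8)
# averaging more samples lowers the estimator's variance and tightens the bound
assert elbo < iwae8 < log_Z
```

K = 1 recovers the ELBO with the prior as variational posterior; FIVO replaces the simple average by a particle-filter estimate of the marginal likelihood in sequential models.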
V. Perrone
,
P. A. Jenkins
,
D. Spano
,
Y. W. Teh
,
Poisson Random Fields for Dynamic Feature Models, Journal of Machine Learning Research (JMLR), Dec. 2017.
@article{PerJenSpa2017a,
author = {Perrone, V. and Jenkins, P. A. and Spano, D. and Teh, Y. W.},
journal = {Journal of Machine Learning Research (JMLR)},
month = dec,
year = {2017},
title = {{P}oisson Random Fields for Dynamic Feature Models},
bdsk-url-1 = {https://arxiv.org/abs/1611.07460},
bdsk-url-2 = {https://arxiv.org/pdf/1611.07460v1.pdf}
}
G. Di Benedetto
,
F. Caron
,
Y. W. Teh
,
Non-exchangeable random partition models for microclustering, Nov-2017.
@unpublished{DiBenedetto2017,
author = {Di Benedetto, Giuseppe and Caron, Fran{\c{c}}ois and Teh, Yee Whye},
title = {Non-exchangeable random partition models for microclustering},
note = {ArXiv e-prints:1711.07287},
archiveprefix = {arXiv},
year = {2017},
month = nov
}
J. Arbel
,
S. Favaro
,
B. Nipoti
,
Y. W. Teh
,
Bayesian nonparametric inference for discovery probabilities: credible intervals and large sample asymptotics, Statistica Sinica, Apr. 2017.
Given a sample of size n from a population of individuals belonging to different species with unknown proportions, a popular problem of practical interest consists in making inference on the probability Dn(l) that the (n+1)-th draw coincides with a species with frequency l in the sample, for any l = 0,1,...,n. This paper contributes to the methodology of Bayesian nonparametric inference for Dn(l). Specifically, under the general framework of Gibbs-type priors we show how to derive credible intervals for a Bayesian nonparametric estimation of Dn(l), and we investigate the large n asymptotic behaviour of such an estimator. Of particular interest are special cases of our results obtained under the specification of the two parameter Poisson–Dirichlet prior and the normalized generalized Gamma prior, which are two of the most commonly used Gibbs-type priors. With respect to these two prior specifications, the proposed results are illustrated through a simulation study and a benchmark Expressed Sequence Tags dataset. To the best of our knowledge, this illustration provides the first comparative study between the two parameter Poisson–Dirichlet prior and the normalized generalized Gamma prior in the context of Bayesian nonparametric inference for Dn(l).
@article{ArbFavNip2017a,
author = {Arbel, J. and Favaro, S. and Nipoti, B. and Teh, Y. W.},
journal = {Statistica Sinica},
title = {{B}ayesian nonparametric inference for discovery probabilities: credible intervals and large sample asymptotics},
year = {2017},
month = apr
}
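For intuition, Dn(l) has a classical frequentist counterpart, the Good-Turing estimator (l+1) m_{l+1} / n, where m_j is the number of species with frequency j in the sample. This is only the classical baseline, not the Gibbs-type Bayesian estimator studied in the paper; a minimal sketch:

```python
from collections import Counter

def good_turing(sample, l):
    """Good-Turing estimate of D_n(l): (l+1) * m_{l+1} / n, where m_j is the
    number of species observed exactly j times among the n draws so far."""
    n = len(sample)
    freq_of_species = Counter(sample)            # species -> frequency
    m = Counter(freq_of_species.values())        # frequency -> species count
    return (l + 1) * m.get(l + 1, 0) / n

sample = list("aaabbc")          # frequencies: a -> 3, b -> 2, c -> 1
print(good_turing(sample, 0))    # chance the next draw is a new species: 1/6
print(good_turing(sample, 1))    # chance it is a species seen exactly once: 2/6
```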
X. Lu
,
V. Perrone
,
L. Hasenclever
,
Y. W. Teh
,
S. J. Vollmer
,
Relativistic Monte Carlo, in Artificial Intelligence and Statistics (AISTATS), 2017.
Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm that generates proposals for a Metropolis-Hastings algorithm by simulating the dynamics of a Hamiltonian system. However, HMC is sensitive to large time discretizations and performs poorly if there is a mismatch between the spatial geometry of the target distribution and the scales of the momentum distribution. In particular the mass matrix of HMC is hard to tune well. In order to alleviate these problems we propose relativistic Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that introduce a maximum velocity on particles. We also derive stochastic gradient versions of the algorithm and show that the resulting algorithms bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning. Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo. In experiments we show that the relativistic algorithms perform better than classical Newtonian variants and Adam.
@inproceedings{LuPerHas2016a,
author = {Lu, X. and Perrone, V. and Hasenclever, L. and Teh, Y. W. and Vollmer, S. J.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Relativistic {M}onte {C}arlo},
month = apr,
year = {2017},
bdsk-url-1 = {https://arxiv.org/pdf/1609.04388v1.pdf}
}
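The core modification over classical HMC is the relativistic kinetic energy, whose gradient gives a velocity with norm strictly below the speed limit c. A sketch of a single leapfrog step under these dynamics (a toy illustration; function and parameter names are mine):

```python
import numpy as np

def relativistic_leapfrog(theta, p, grad_U, eps, m=1.0, c=1.0):
    """One leapfrog step for relativistic HMC. Kinetic energy
    K(p) = m c^2 sqrt(|p|^2 / (m^2 c^2) + 1), so the velocity
    v = p / (m sqrt(|p|^2 / (m^2 c^2) + 1)) has norm strictly below c, and
    the position update is bounded however large the momentum or gradient."""
    p = p - 0.5 * eps * grad_U(theta)
    v = p / (m * np.sqrt(np.sum(p ** 2) / (m ** 2 * c ** 2) + 1.0))
    theta = theta + eps * v
    p = p - 0.5 * eps * grad_U(theta)
    return theta, p

# standard normal target: U(theta) = theta^2 / 2, so grad_U(theta) = theta
theta, p = relativistic_leapfrog(np.array([0.0]), np.array([10.0]),
                                 lambda th: th, eps=0.1)
assert abs(theta[0]) < 0.1   # speed capped at c = 1: |position step| < eps * c
```

In the Newtonian case (v = p / m) the same momentum would move the position by 1.0 in one step; the relativistic cap is what yields the connection to gradient clipping mentioned in the abstract.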
L. Hasenclever
,
S. Webb
,
T. Lienart
,
S. Vollmer
,
B. Lakshminarayanan
,
C. Blundell
,
Y. W. Teh
,
Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server, Journal of Machine Learning Research, vol. 18, no. 106, 1–37, 2017.
This paper makes two contributions to Bayesian machine learning algorithms. Firstly, we propose stochastic natural gradient expectation propagation (SNEP), a novel alternative to expectation propagation (EP), a popular variational inference algorithm. SNEP is a black box variational algorithm, in that it does not require any simplifying assumptions on the distribution of interest, beyond the existence of some Monte Carlo sampler for estimating the moments of the EP tilted distributions. Further, as opposed to EP which has no guarantee of convergence, SNEP can be shown to be convergent, even when using Monte Carlo moment estimates. Secondly, we propose a novel architecture for distributed Bayesian learning which we call the posterior server. The posterior server allows scalable and robust Bayesian learning in cases where a dataset is stored in a distributed manner across a cluster, with each compute node containing a disjoint subset of data. An independent Monte Carlo sampler is run on each compute node, with direct access only to the local data subset, but which targets an approximation to the global posterior distribution given all data across the whole cluster. This is achieved by using a distributed asynchronous implementation of SNEP to pass messages across the cluster. We demonstrate SNEP and the posterior server on distributed Bayesian learning of logistic regression and neural networks.
@article{HasWebLie2015a,
author = {Hasenclever, Leonard and Webb, Stefan and Lienart, Thibaut and Vollmer, Sebastian and Lakshminarayanan, Balaji and Blundell, Charles and Teh, Yee Whye},
title = {Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server},
journal = {Journal of Machine Learning Research},
year = {2017},
volume = {18},
number = {106},
pages = {1-37}
}
C. J. Maddison
,
A. Mnih
,
Y. W. Teh
,
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, in International Conference on Learning Representations (ICLR), 2017.
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of concrete relaxations on density estimation and structured prediction tasks using neural networks.
@inproceedings{maddison2016concrete,
author = {Maddison, Chris J. and Mnih, Andriy and Teh, Yee Whye},
booktitle = {International Conference on Learning Representations (ICLR)},
note = {ArXiv e-prints:1611.00712},
title = {{The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables}},
year = {2017},
bdsk-url-1 = {https://arxiv.org/pdf/1611.00712.pdf}
}
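A Concrete sample is a temperature-scaled softmax of Gumbel-perturbed logits: X = softmax((log α + G) / λ) with G ~ Gumbel(0,1). A minimal numpy sketch (my own, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_concrete(log_alpha, temperature):
    """Sample a Concrete relaxation of Categorical(softmax(log_alpha)):
    X = softmax((log_alpha + G) / temperature), with G ~ Gumbel(0,1)."""
    g = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))  # Gumbel(0,1) noise
    z = (log_alpha + g) / temperature
    z = z - z.max()                  # subtract max for numerical stability
    x = np.exp(z)
    return x / x.sum()

log_alpha = np.log(np.array([0.7, 0.2, 0.1]))
soft = sample_concrete(log_alpha, 1.0)   # a relaxed, dense point on the simplex
# by the Gumbel-max trick, the argmax follows the underlying categorical exactly
draws = [sample_concrete(log_alpha, 0.01).argmax() for _ in range(2000)]
frac = float(np.mean(np.array(draws) == 0))
assert np.isclose(soft.sum(), 1.0) and 0.6 < frac < 0.8
```

Because the sample is a differentiable function of log_alpha, gradients can flow through it, at the cost of the bias the abstract mentions.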
B. Bloem-Reddy
,
E. Mathieu
,
A. Foster
,
T. Rainforth
,
H. Ge
,
M. Lomelí
,
Z. Ghahramani
,
Y. W. Teh
,
Sampling and inference for discrete random probability measures in probabilistic programs, NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.
We consider the problem of sampling a sequence from a discrete random probability measure (RPM) with countable support, under (probabilistic) constraints of finite memory and computation. A canonical example is sampling from the Dirichlet Process, which can be accomplished using its stick-breaking representation and lazy initialization of its atoms. We show that efficiently lazy initialization is possible if and only if a size-biased representation of the discrete RPM is used. For models constructed from such discrete RPMs, we consider the implications for generic particle-based inference methods in probabilistic programming systems. To demonstrate, we implement SMC for Normalized Inverse Gaussian Process mixture models in Turing.
@article{bloemreddy2017rpm,
title = {Sampling and inference for discrete random probability measures in probabilistic programs},
author = {Bloem-Reddy, Benjamin and Mathieu, Emile and Foster, Adam and Rainforth, Tom and Ge, Hong and Lomelí, María and Ghahramani, Zoubin and Teh, Yee Whye},
journal = {NIPS Workshop on Advances in Approximate Bayesian Inference},
year = {2017}
}
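The canonical example from the abstract, lazily sampling a sequence from a Dirichlet Process draw via stick-breaking, can be sketched as follows (a naive sequential version with my own names; atoms and stick weights are instantiated only when the sampled index first requires them):

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_sequence(n, alpha=1.0):
    """Lazily sample n values from a single draw of DP(alpha, N(0,1))
    via its stick-breaking representation, creating atoms only on demand."""
    sticks, atoms, out = [], [], []
    for _ in range(n):
        u, k, acc, rest = rng.uniform(), 0, 0.0, 1.0
        while True:
            if k == len(sticks):              # lazy initialization of atom k
                sticks.append(rng.beta(1.0, alpha))
                atoms.append(rng.standard_normal())
            w = rest * sticks[k]              # weight beta_k * prod_{j<k}(1 - beta_j)
            if u < acc + w:
                out.append(atoms[k])
                break
            acc += w
            rest *= 1.0 - sticks[k]
            k += 1
    return out

seq = dp_sequence(50)
assert len(seq) == 50 and len(set(seq)) < 50   # repeated values: the measure is discrete
```

For the DP the stick-breaking weights are already in size-biased order, which is exactly the property the paper identifies as necessary and sufficient for efficient lazy initialization.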
S. Flaxman
,
Y. Teh
,
D. Sejdinovic
,
Poisson Intensity Estimation with Reproducing Kernels, Electronic Journal of Statistics, vol. 11, no. 2, 5081–5104, 2017.
Despite the fundamental nature of the inhomogeneous Pois-
son process in the theory and application of stochastic processes, and its
attractive generalizations (e.g. Cox process), few tractable nonparametric
modeling approaches of intensity functions exist, especially when observed
points lie in a high-dimensional space. In this paper we develop a new,
computationally tractable Reproducing Kernel Hilbert Space (RKHS) for-
mulation for the inhomogeneous Poisson process. We model the square root
of the intensity as an RKHS function. Whereas RKHS models used in su-
pervised learning rely on the so-called representer theorem, the form of
the inhomogeneous Poisson process likelihood means that the representer
theorem does not apply. However, we prove that the representer theorem
does hold in an appropriately transformed RKHS, guaranteeing that the
optimization of the penalized likelihood can be cast as a tractable finite-
dimensional problem. The resulting approach is simple to implement, and
readily scales to high dimensions and large-scale datasets.
@article{FlaTehSej2017ejs,
author = {Flaxman, S. and Teh, Y.W. and Sejdinovic, D.},
title = {{{Poisson Intensity Estimation with Reproducing Kernels}}},
journal = {Electronic Journal of Statistics},
year = {2017},
volume = {11},
number = {2},
pages = {5081--5104}
}
J. Mitrovic
,
D. Sejdinovic
,
Y. W. Teh
,
Deep Kernel Machines via the Kernel Reparametrization Trick, in International Conference on Learning Representations (ICLR) Workshop Track, 2017.
While deep neural networks have achieved state-of-the-art performance on many tasks across varied domains, they still remain black boxes whose inner workings are hard to interpret and understand. In this paper, we develop a novel method for efficiently capturing the behaviour of deep neural networks using kernels. In particular, we construct a hierarchy of increasingly complex kernels that encode individual hidden layers of the network. Furthermore, we discuss how our framework motivates a novel supervised weight initialization method that discovers highly discriminative features already at initialization.
@inproceedings{MitSejTeh2017,
author = {Mitrovic, J. and Sejdinovic, D. and Teh, Y. W.},
booktitle = {International Conference on Learning Representations (ICLR) Workshop Track},
title = {{Deep Kernel Machines via the Kernel Reparametrization Trick}},
year = {2017},
bdsk-url-1 = {https://openreview.net/forum?id=Bkiqt3Ntg&noteId=Bkiqt3Ntg}
}
S. Flaxman
,
Y. W. Teh
,
D. Sejdinovic
,
Poisson Intensity Estimation with Reproducing Kernels, in Artificial Intelligence and Statistics (AISTATS), 2017.
Despite the fundamental nature of the inhomogeneous Poisson process in the theory and application of stochastic processes, and its attractive generalizations (e.g. Cox process), few tractable nonparametric modeling approaches of intensity functions exist, especially in high dimensional settings. In this paper we develop a new, computationally tractable Reproducing Kernel Hilbert Space (RKHS) formulation for the inhomogeneous Poisson process. We model the square root of the intensity as an RKHS function. The modeling challenge is that the usual representer theorem arguments no longer apply due to the form of the inhomogeneous Poisson process likelihood. However, we prove that the representer theorem does hold in an appropriately transformed RKHS, guaranteeing that the optimization of the penalized likelihood can be cast as a tractable finite-dimensional problem. The resulting approach is simple to implement, and readily scales to high dimensions and large-scale datasets.
@inproceedings{FlaTehSej2017,
author = {Flaxman, S. and Teh, Y. W. and Sejdinovic, D.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {{Poisson Intensity Estimation with Reproducing Kernels}},
year = {2017}
}
Y. W. Teh
,
V. Bapst
,
W. M. Czarnecki
,
J. Quan
,
J. Kirkpatrick
,
R. Hadsell
,
N. Heess
,
R. Pascanu
,
Distral: Robust multitask reinforcement learning, in Advances in Neural Information Processing Systems (NeurIPS), 2017.
Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a "distilled" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable—attributes that are critical in deep reinforcement learning.
@inproceedings{TehBapCza2017a,
author = {Teh, Y. W. and Bapst, V. and Czarnecki, W. M. and Quan, J. and Kirkpatrick, J. and Hadsell, R. and Heess, N. and Pascanu, R.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Distral: Robust multitask reinforcement learning},
year = {2017}
}
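The "centroid" distillation step has a simple closed form under forward KL: the shared policy minimising the sum of KL(π_i ‖ π0) over tasks is the average of the task policies. A toy check of just this step (it does not reproduce the full Distral objective, which couples distillation with task training):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """KL divergence between two discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

# two task policies over three actions
pi1 = np.array([0.7, 0.2, 0.1])
pi2 = np.array([0.2, 0.5, 0.3])

# the pi0 minimising KL(pi1||pi0) + KL(pi2||pi0) over the simplex is the
# average of the task policies (first-order conditions with a sum-to-one
# Lagrange multiplier); verify against random alternatives
centroid = (pi1 + pi2) / 2.0
avg_kl = lambda pi0: kl(pi1, pi0) + kl(pi2, pi0)
for _ in range(200):
    other = rng.dirichlet(np.ones(3))
    assert avg_kl(centroid) <= avg_kl(other) + 1e-12
```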
C. J. Maddison
,
D. Lawson
,
G. Tucker
,
N. Heess
,
M. Norouzi
,
A. Mnih
,
A. Doucet
,
Y. W. Teh
,
Particle Value Functions, in ICLR 2017 Workshop Proceedings, 2017.
The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent’s experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
@inproceedings{MadLawTuc2017a,
author = {Maddison, C. J. and Lawson, D. and Tucker, G. and Heess, N. and Norouzi, M. and Mnih, A. and Doucet, A. and Teh, Y. W.},
booktitle = {ICLR 2017 Workshop Proceedings},
note = {ArXiv e-prints: 1703.05820},
title = {Particle Value Functions},
year = {2017},
bdsk-url-1 = {https://arxiv.org/pdf/1703.05820.pdf}
}
M. Lomeli
,
S. Favaro
,
Y. W. Teh
,
A Marginal Sampler for σ-Stable Poisson-Kingman Mixture Models, Journal of Computational and Graphical Statistics, 2017.
We investigate the class of σ-stable Poisson-Kingman random probability measures (RPMs) in the context of Bayesian nonparametric mixture modeling. This is a large class of discrete RPMs which encompasses most of the popular discrete RPMs used in Bayesian nonparametrics, such as the Dirichlet process, Pitman-Yor process, the normalized inverse Gaussian process and the normalized generalized Gamma process. We show how certain sampling properties and marginal characterisations of σ-stable Poisson-Kingman RPMs can be usefully exploited for devising a Markov chain Monte Carlo (MCMC) algorithm for performing posterior inference with a Bayesian nonparametric mixture model. Specifically, we introduce a novel and efficient MCMC sampling scheme in an augmented space that has a fixed number of auxiliary variables per iteration. We apply our sampling scheme to density estimation and clustering tasks with unidimensional and multidimensional datasets, and compare it against competing MCMC sampling schemes.
@article{LomFavTeh2015b,
author = {Lomeli, M. and Favaro, S. and Teh, Y. W.},
doi = {10.1080/10618600.2015.1110526},
journal = {Journal of Computational and Graphical Statistics},
title = {A Marginal Sampler for {$\sigma$}-Stable {P}oisson-{K}ingman Mixture Models},
year = {2017},
bdsk-url-1 = {http://www.tandfonline.com/doi/abs/10.1080/10618600.2015.1110526},
bdsk-url-2 = {http://dx.doi.org/10.1080/10618600.2015.1110526},
bdsk-url-3 = {https://arxiv.org/pdf/1407.4211v3.pdf}
}
2016
S. Flaxman
,
D. Sutherland
,
Y. Wang
,
Y. W. Teh
,
Understanding the 2016 US Presidential Election using ecological inference and distribution regression with census microdata, Arxiv e-prints, Nov-2016.
We combine fine-grained spatially referenced census data with the vote outcomes from the 2016 US presidential election. Using this dataset, we perform ecological inference using distribution regression (Flaxman et al., KDD 2015) with a multinomial-logit regression so as to model the vote outcome Trump, Clinton, Other / Didn’t vote as a function of demographic and socioeconomic features. Ecological inference allows us to estimate "exit poll"-style results, such as Trump’s support among white women, but for entirely novel categories. We also perform exploratory data analysis to understand which census variables are predictive of voting for Trump, voting for Clinton, or not voting for either. All of our methods are implemented in Python and R and are available online for replication.
@unpublished{flaxsuthetal2016,
author = {Flaxman, Seth and Sutherland, Dougal and Wang, Yu-Xiang and Teh, Yee Whye},
title = {Understanding the 2016 US Presidential Election using ecological inference and distribution regression with census microdata},
note = {ArXiv e-prints: 1611.03787},
year = {2016},
month = nov
}
K. Palla
,
F. Caron
,
Y. W. Teh
,
A Bayesian nonparametric model for sparse dynamic networks, Jun-2016.
We propose a Bayesian nonparametric prior for time-varying networks. To each node of the network is associated a positive parameter, modeling the sociability of that node. Sociabilities are assumed to evolve over time, and are modeled via a dynamic point process model. The model is able to (a) capture smooth evolution of the interaction between nodes, allowing edges to appear/disappear over time; (b) capture long term evolution of the sociabilities of the nodes; and (c) yield sparse graphs, where the number of edges grows subquadratically with the number of nodes. The evolution of the sociabilities is described by a tractable time-varying gamma process. We provide some theoretical insights into the model and apply it to three real world datasets.
@unpublished{PallaCaronTeh2016,
author = {Palla, Konstantina and Caron, Francois and Teh, Yee Whye},
title = {A Bayesian nonparametric model for sparse dynamic networks},
note = {ArXiv e-prints: 1607.01624},
archiveprefix = {arXiv},
year = {2016},
month = jun
}
H. Kim
,
X. Lu
,
S. Flaxman
,
Y. W. Teh
,
Collaborative Filtering with Side Information: a Gaussian Process Perspective, 2016.
We tackle the problem of collaborative filtering (CF) with side information, through the lens of Gaussian Process (GP) regression. Driven by the idea of using the kernel to explicitly model user-item similarities, we formulate the GP in a way that allows the incorporation of low-rank matrix factorisation, arriving at our model, the Tucker Gaussian Process (TGP). Consequently, TGP generalises classical Bayesian matrix factorisation models, and goes beyond them to give a natural and elegant method for incorporating side information, giving enhanced predictive performance for CF problems. Moreover we show that it is a novel model for regression, especially well-suited to grid-structured data and problems where the dependence on covariates is close to being separable.
@unpublished{kimluflateh16,
title = {Collaborative Filtering with Side Information: a Gaussian Process Perspective},
author = {Kim, H. and Lu, X. and Flaxman, S. and Teh, Y. W.},
note = {ArXiv e-prints: 1605.07025},
year = {2016}
}
J. Mitrovic
,
D. Sejdinovic
,
Y. W. Teh
,
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression, in International Conference on Machine Learning (ICML), 2016, 1482–1491.
Performing exact posterior inference in complex generative models is often difficult or impossible due to a likelihood function that is expensive to evaluate or intractable. Approximate Bayesian computation (ABC) is an inference framework that constructs an approximation to the true likelihood based on the similarity between the observed and simulated data as measured by a predefined set of summary statistics. Although the choice of appropriate problem-specific summary statistics crucially influences the quality of the likelihood approximation, and hence also the quality of the posterior sample in ABC, there are only a few principled general-purpose approaches to the selection or construction of such summary statistics. In this paper, we develop a novel framework for this task using kernel-based distribution regression. We model the functional relationship between data distributions and the optimal choice (with respect to a loss function) of summary statistics using kernel-based distribution regression. We show that our approach can be implemented in a computationally and statistically efficient way using the random Fourier features framework for large-scale kernel learning. In addition, our framework shows superior performance when compared to related methods on toy and real-world problems.
@inproceedings{MitSejTeh2016,
author = {Mitrovic, J. and Sejdinovic, D. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
pages = {1482--1491},
title = {{DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression}},
year = {2016},
bdsk-url-1 = {http://jmlr.org/proceedings/papers/v48/mitrovic16.html}
}
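The random Fourier features machinery the abstract relies on can be illustrated with a short sketch. This is a generic RBF-kernel approximation in the style of Rahimi and Recht, not the paper's distribution-regression code:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    """Feature map Z such that Z @ Z.T approximates the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # The spectral measure of this RBF kernel is Gaussian with covariance 2*gamma*I.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=20000, gamma=0.5)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_exact - Z @ Z.T).max())  # small for large n_features
```

The approximation error shrinks at rate O(1/sqrt(n_features)), which is what makes the large-scale regression in the paper tractable.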
T. Fernandez
,
Y. W. Teh
,
Posterior Consistency for a Non-parametric Survival Model under a Gaussian Process Prior, 2016.
@unpublished{FerTeh2016a,
author = {Fernandez, T. and Teh, Y. W.},
note = {ArXiv e-prints: 1611.02335},
title = {Posterior Consistency for a Non-parametric Survival Model under a {G}aussian Process Prior},
year = {2016},
bdsk-url-1 = {https://arxiv.org/abs/1611.02335},
bdsk-url-2 = {https://arxiv.org/pdf/1611.02335v1.pdf}
}
T. Fernandez
,
N. Rivera
,
Y. W. Teh
,
Gaussian Processes for Survival Analysis, in Advances in Neural Information Processing Systems (NeurIPS), 2016.
We introduce a semi-parametric Bayesian model for survival analysis. The model is centred on a parametric baseline hazard, and uses a Gaussian process to model variations away from it nonparametrically, as well as dependence on covariates. As opposed to many other methods in survival analysis, our framework does not impose unnecessary constraints on the hazard rate or the survival function. Furthermore, our model handles the left, right and interval censoring mechanisms common in survival analysis. We propose an MCMC algorithm to perform inference and an approximation scheme based on random Fourier features to make computations faster. We report experimental results on synthetic and real data, showing that our model performs better than competing models such as Cox proportional hazards, ANOVA-DDP and random survival forests.
@inproceedings{FerRivTeh2016a,
author = {Fernandez, T. and Rivera, N. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Gaussian Processes for Survival Analysis},
year = {2016},
bdsk-url-1 = {http://papers.nips.cc/paper/6443-gaussian-processes-for-survival-analysis},
bdsk-url-2 = {http://papers.nips.cc/paper/6443-gaussian-processes-for-survival-analysis.pdf}
}
H. Kim
,
Y. W. Teh
,
Scalable Structure Discovery in Regression using Gaussian Processes, in Proceedings of the 2016 Workshop on Automatic Machine Learning, 2016.
Automatic Bayesian Covariance Discovery (ABCD) in Lloyd et al. (2014) provides a framework for automating statistical modelling as well as exploratory data analysis for regression problems. However, ABCD does not scale due to its O(N^3) running time. This is undesirable not only because the average size of data sets is growing fast, but also because there is potentially more information in bigger data, implying a greater need for more expressive models that can discover sophisticated structure. We propose a scalable version of ABCD, to encompass big data within the boundaries of automated statistical modelling.
@inproceedings{KimTeh2016a,
author = {Kim, H. and Teh, Y. W.},
booktitle = {Proceedings of the 2016 Workshop on Automatic Machine Learning},
title = {Scalable Structure Discovery in Regression using Gaussian Processes},
year = {2016},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v64/kim_scalable_2016.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v64/kim_scalable_2016.pdf}
}
L. T. Elliott
,
Y. W. Teh
,
A Nonparametric HMM for Genetic Imputation and Coalescent Inference, Electronic Journal of Statistics, 2016.
Genetic sequence data are well described by hidden Markov models (HMMs) in which latent states correspond to clusters of similar mutation patterns. Theory from statistical genetics suggests that these HMMs are nonhomogeneous (their transition probabilities vary along the chromosome) and have large support for self transitions. We develop a new nonparametric model of genetic sequence data, based on the hierarchical Dirichlet process, which supports these self transitions and nonhomogeneity. Our model provides a parameterization of the genetic process that is more parsimonious than other more general nonparametric models which have previously been applied to population genetics. We provide truncation-free MCMC inference for our model using a new auxiliary sampling scheme for Bayesian nonparametric HMMs. In a series of experiments on male X chromosome data from the Thousand Genomes Project, and also on data simulated from a population bottleneck, we show the benefits of our model over the popular finite model fastPHASE, which can itself be seen as a parametric truncation of our model. We find that the number of HMM states found by our model is correlated with the time to the most recent common ancestor in population bottlenecks. This work demonstrates the flexibility of Bayesian nonparametrics applied to large and complex genetic data.
@article{EllTeh2016a,
author = {Elliott, L. T. and Teh, Y. W.},
journal = {Electronic Journal of Statistics},
title = {A Nonparametric {HMM} for Genetic Imputation and Coalescent Inference},
year = {2016}
}
S. Favaro
,
A. Lijoi
,
C. Nava
,
B. Nipoti
,
I. Prünster
,
Y. W. Teh
,
On the Stick-Breaking Representation for Homogeneous NRMIs, Bayesian Analysis, vol. 11, 697–724, 2016.
In this paper, we consider homogeneous normalized random measures with independent increments (hNRMI), a class of nonparametric priors recently introduced in the literature. Many of their distributional properties are known by now but their stick-breaking representation is missing. Here we display such a representation, which will feature dependent stick-breaking weights, and then derive explicit versions for noteworthy special cases of hNRMI. Posterior characterizations are also discussed. Finally, we devise an algorithm for slice sampling mixture models based on hNRMIs, which relies on the representation we have obtained, and implement it to analyze real data.
@article{FavLijNav2016a,
author = {Favaro, S. and Lijoi, A. and Nava, C. and Nipoti, B. and Pr\"unster, I. and Teh, Y. W.},
journal = {Bayesian Analysis},
pages = {697-724},
title = {On the Stick-Breaking Representation for Homogeneous {NRMIs}},
volume = {11},
year = {2016},
bdsk-url-1 = {http://projecteuclid.org/euclid.ba/1440594949},
bdsk-url-2 = {http://projecteuclid.org/download/pdfview_1/euclid.ba/1440594949}
}
Y. W. Teh
,
Bayesian Nonparametric Modelling and the Ubiquitous Ewens Sampling Formula, Statistical Science, vol. 31, no. 1, 34–36, 2016.
M. Balog
,
B. Lakshminarayanan
,
Z. Ghahramani
,
D. M. Roy
,
Y. W. Teh
,
The Mondrian Kernel, in Uncertainty in Artificial Intelligence (UAI), 2016.
We introduce the Mondrian kernel, a fast random feature approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random features can be re-used efficiently for all kernel widths. The features are constructed by sampling trees via a Mondrian process [Roy and Teh, 2009], and we highlight the connection to Mondrian forests [Lakshminarayanan et al., 2014], where trees are also sampled via a Mondrian process, but fit independently. This link provides a new insight into the relationship between kernel methods and random forests.
@inproceedings{BalLakGha2016a,
author = {Balog, M. and Lakshminarayanan, B. and Ghahramani, Z. and Roy, D. M. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {The {M}ondrian Kernel},
year = {2016},
bdsk-url-1 = {http://auai.org/uai2016/proceedings/papers/236.pdf},
bdsk-url-2 = {http://auai.org/uai2016/proceedings/supp/236_supp.pdf}
}
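The Mondrian process underlying the kernel is easiest to see in one dimension. The sketch below samples the cut locations of a 1D Mondrian with a given lifetime; it illustrates the generative process only and is not the paper's implementation:

```python
import numpy as np

def mondrian_1d(low, high, budget, rng):
    """Cut locations of a one-dimensional Mondrian process on [low, high].
    An interval of length L waits an Exp(rate=L) time before being cut at
    a uniform location; recursion stops once the lifetime budget is spent."""
    cuts = []
    def recurse(a, b, t):
        t_cut = t + rng.exponential(1.0 / (b - a))  # rate = interval length
        if t_cut > budget:
            return
        x = rng.uniform(a, b)
        cuts.append(x)
        recurse(a, x, t_cut)
        recurse(x, b, t_cut)
    recurse(low, high, 0.0)
    return sorted(cuts)

cuts = mondrian_1d(0.0, 1.0, budget=5.0, rng=np.random.default_rng(0))
print(len(cuts))
```

The self-consistency property mentioned in the abstract means that restricting such a partition to a subinterval yields another Mondrian process, which is what lets the random features be shared across kernel widths.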
Y. W. Teh
,
A. H. Thiéry
,
S. J. Vollmer
,
Consistency and Fluctuations for Stochastic Gradient Langevin Dynamics, Journal of Machine Learning Research, 2016.
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally expensive. Both the calculation of the acceptance probability and the creation of informed proposals usually require an iteration through the whole data set. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem by generating proposals which are based only on a subset of the data, by skipping the accept-reject step and by using a decreasing step-size sequence (δ_m)_m≥0. We provide in this article a rigorous mathematical framework for analysing this algorithm. We prove that, under verifiable assumptions, the algorithm is consistent, satisfies a central limit theorem (CLT) and its asymptotic bias-variance decomposition can be characterized by an explicit functional of the step-size sequence (δ_m)_m≥0. We leverage this analysis to give practical recommendations for the notoriously difficult tuning of this algorithm: it is asymptotically optimal to use a step-size sequence of the type δ_m≍m^−1/3, leading to an algorithm whose mean squared error (MSE) decreases at rate O(m^−1/3).
@article{TehThiVol2016a,
author = {Teh, Y. W. and Thi\'ery, A. H. and Vollmer, S. J.},
journal = {Journal of Machine Learning Research},
title = {Consistency and Fluctuations for Stochastic Gradient {L}angevin Dynamics},
year = {2016},
bdsk-url-1 = {http://jmlr.org/papers/v17/teh16a.html},
bdsk-url-2 = {http://www.jmlr.org/papers/volume17/teh16a/teh16a.pdf}
}
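The SGLD update analysed here, a minibatch gradient step plus injected Gaussian noise with a decreasing step size, can be sketched on a toy Gaussian-mean problem. This is an illustration using the m^(-1/3) schedule the analysis recommends, not the authors' code; the constant `a` is an arbitrary tuning choice for this example:

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, data, theta0, n_iter=10000,
         batch_size=32, a=1e-4, seed=0):
    """SGLD with step sizes delta_m = a * m**(-1/3)."""
    rng = np.random.default_rng(seed)
    N = len(data)
    theta = np.array(theta0, dtype=float)
    for m in range(1, n_iter + 1):
        delta = a * m ** (-1.0 / 3.0)
        batch = data[rng.integers(0, N, size=batch_size)]
        # Unbiased minibatch estimate of the gradient of the log posterior.
        grad = (grad_log_prior(theta)
                + (N / batch_size) * grad_log_lik(theta, batch).sum(axis=0))
        # Half-step along the gradient plus Langevin noise of variance delta.
        theta = theta + 0.5 * delta * grad + rng.normal(scale=np.sqrt(delta),
                                                        size=theta.shape)
    return theta

# Toy target: posterior over the mean of N(theta, 1) data, vague N(0, 100) prior.
data = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=(1000, 1))
theta = sgld(lambda t: -t / 100.0, lambda t, x: x - t, data, theta0=[0.0])
print(theta)  # hovers near the data mean, 2.0
```

Because the step sizes decrease, no Metropolis correction is applied, which is exactly the approximation whose bias and fluctuations the paper characterizes.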
S. J. Vollmer
,
K. C. Zygalakis
,
Y. W. Teh
,
Exploration of the (Non-)asymptotic Bias and Variance of Stochastic Gradient Langevin Dynamics, Journal of Machine Learning Research (JMLR), 2016.
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally infeasible. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem in three ways: it generates proposed moves using only a subset of the data, it skips the Metropolis-Hastings accept-reject step, and it uses sequences of decreasing step sizes. In Teh et al. (2014), we provided the mathematical foundations for the decreasing step size SGLD, including consistency and a central limit theorem. However, in practice the SGLD is run for a relatively small number of iterations, and its step size is not decreased to zero. The present article investigates the behaviour of the SGLD with fixed step size. In particular we characterise the asymptotic bias explicitly, along with its dependence on the step size and the variance of the stochastic gradient. On that basis a modified SGLD which removes the asymptotic bias due to the variance of the stochastic gradients up to first order in the step size is derived. Moreover, we are able to obtain bounds on the finite-time bias, variance and mean squared error (MSE). The theory is illustrated with a Gaussian toy model for which the bias and the MSE for the estimation of moments can be obtained explicitly. For this toy model we study the gain of the SGLD over the standard Euler method in the limit of large data sets.
@article{VolZygTeh2016a,
author = {Vollmer, S. J. and Zygalakis, K. C. and Teh, Y. W.},
journal = {Journal of Machine Learning Research (JMLR)},
title = {Exploration of the (Non-)asymptotic Bias and Variance of Stochastic Gradient {L}angevin Dynamics},
year = {2016},
bdsk-url-1 = {http://jmlr.org/papers/v17/15-494.html},
bdsk-url-2 = {http://www.jmlr.org/papers/volume17/15-494/15-494.pdf}
}
B. Lakshminarayanan
,
D. M. Roy
,
Y. W. Teh
,
Mondrian Forests for Large-Scale Regression when Uncertainty Matters, in Artificial Intelligence and Statistics (AISTATS), 2016.
Many real-world regression problems demand a measure of the uncertainty associated with each prediction. Standard decision forests deliver efficient state-of-the-art predictive performance, but high-quality uncertainty estimates are lacking. Gaussian processes (GPs) deliver uncertainty estimates, but scaling GPs to large-scale data sets comes at the cost of approximating the uncertainty estimates. We extend Mondrian forests, first proposed by Lakshminarayanan et al. (2014) for classification problems, to the large-scale nonparametric regression setting. Using a novel hierarchical Gaussian prior that dovetails with the Mondrian forest framework, we obtain principled uncertainty estimates, while still retaining the computational advantages of decision forests. Through a combination of illustrative examples, real-world large-scale datasets and Bayesian optimization benchmarks, we demonstrate that Mondrian forests outperform approximate GPs on large-scale regression tasks and deliver better-calibrated uncertainty assessments than decision-forest-based methods.
@inproceedings{LakRoyTeh2016a,
author = {Lakshminarayanan, B. and Roy, D. M. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {{M}ondrian Forests for Large-Scale Regression when Uncertainty Matters},
year = {2016},
bdsk-url-1 = {http://jmlr.org/proceedings/papers/v51/lakshminarayanan16.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v51/lakshminarayanan16.pdf},
bdsk-url-3 = {http://jmlr.org/proceedings/papers/v51/lakshminarayanan16-supp.pdf}
}
D. Glowacka
,
Y. W. Teh
,
J. Shawe-Taylor
,
Image Retrieval with a Bayesian Model of Relevance Feedback, 2016.
A content-based image retrieval system based on multinomial relevance feedback is proposed. The system relies on an interactive search paradigm where at each round a user is presented with k images and selects the one closest to their ideal target. Two approaches, one based on the Dirichlet distribution and one based on the Beta distribution, are used to model the problem, motivating an algorithm that trades off exploration against exploitation in presenting the images in each round. Experimental results show that the new approach compares favourably with previous work.
@unpublished{GloTehSha2016a,
author = {Glowacka, D. and Teh, Y. W. and Shawe-Taylor, J.},
note = {ArXiv e-prints: 1603.09522},
title = {Image Retrieval with a {B}ayesian Model of Relevance Feedback},
year = {2016},
bdsk-url-1 = {https://arxiv.org/pdf/1603.09522.pdf}
}
M. Battiston
,
S. Favaro
,
Y. W. Teh
,
Multi-armed bandit for species discovery: A Bayesian nonparametric approach, Journal of the American Statistical Association, 2016.
Let (P1,...,PJ) denote J populations of animals from distinct regions. A priori, it is unknown which species are present in each region and what their corresponding frequencies are. Species are shared among populations, and each species can be present in more than one region, with its frequency varying across populations. In this paper we consider the problem of sequentially sampling these populations in order to observe the greatest number of different species. We adopt a Bayesian nonparametric approach and endow (P1,...,PJ) with a hierarchical Pitman-Yor process prior. As a consequence of the hierarchical structure, the J unknown discrete probability measures share the same support, that of their common random base measure. Given this prior choice, we propose a sequential rule that, at every time step, given the information available up to that point, selects the population from which to collect the next observation. Rather than picking the population with the highest posterior estimate of producing a new value, the proposed rule includes a Thompson sampling step to better balance the exploration-exploitation trade-off. We also propose an extension of the algorithm to deal with incidence data, where multiple observations are collected in each time period. The performance of the proposed algorithms is assessed through a simulation study and compared to three other strategies. Finally, we compare these algorithms using a dataset of species of trees collected from different plots in South America.
@article{BatFavTeh2016a,
author = {Battiston, M. and Favaro, S. and Teh, Y. W.},
doi = {http://dx.doi.org/10.1080/01621459.2016.1261711},
journal = {Journal of the American Statistical Association},
title = {Multi-armed bandit for species discovery: A Bayesian nonparametric approach},
year = {2016},
bdsk-url-1 = {http://www.tandfonline.com/doi/full/10.1080/01621459.2016.1261711},
bdsk-url-2 = {http://www.tandfonline.com/doi/pdf/10.1080/01621459.2016.1261711?needAccess=true}
}
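A toy version of the Thompson sampling idea can be sketched with a simple Beta-Bernoulli posterior per region standing in for the hierarchical Pitman-Yor posterior used in the paper; each round we sample from the posteriors on "probability the next draw is a new species" and visit the region with the largest sample:

```python
import numpy as np

def thompson_species(populations, n_rounds, rng):
    """Sequentially choose which region to sample so as to discover
    new species, via Thompson sampling with Beta posteriors."""
    J = len(populations)
    a, b = np.ones(J), np.ones(J)   # Beta(1, 1) priors per region
    discovered = set()
    for _ in range(n_rounds):
        j = int(np.argmax(rng.beta(a, b)))       # Thompson step
        pop = populations[j]
        species = pop[rng.integers(len(pop))]    # observe one animal
        is_new = species not in discovered
        discovered.add(species)
        a[j] += is_new                           # Beta-Bernoulli update
        b[j] += not is_new
    return discovered

# Species labels are shared across regions; region 1 is far more diverse.
pops = [list(range(5)), list(range(100))]
found = thompson_species(pops, n_rounds=200, rng=np.random.default_rng(0))
print(len(found))
```

The randomised selection keeps occasionally revisiting apparently exhausted regions, which is the exploration-exploitation balance the abstract refers to.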
2015
A. G. Deshwar
,
L. Boyles
,
J. Wintersinger
,
P. C. Boutros
,
Y. W. Teh
,
Q. Morris
,
Abstract B2-59: PhyloSpan: using multimutation reads to resolve subclonal architectures from heterogeneous tumor samples, AACR Special Conference on Computational and Systems Biology of Cancer, vol. 75, 2015.
@article{BouBoyDes2015a,
author = {Deshwar, A. G. and Boyles, L. and Wintersinger, J. and Boutros, P. C. and Teh, Y. W. and Morris, Q.},
booktitle = {AACR Special Conference on Computational and Systems Biology of Cancer},
doi = {10.1158/1538-7445.AM2015-4865},
journal = {Cancer Research},
title = {Abstract {B2-59}: {PhyloSpan}: using multimutation reads to resolve subclonal architectures from heterogeneous tumor samples},
volume = {75},
year = {2015},
bdsk-url-1 = {http://cancerres.aacrjournals.org/content/75/22_Supplement_2/B2-59.short},
bdsk-url-2 = {http://dx.doi.org/10.1158/1538-7445.AM2015-4865}
}
S. Favaro
,
B. Nipoti
,
Y. W. Teh
,
Rediscovery of Good-Turing Estimators via Bayesian Nonparametrics, Biometrics, 2015.
The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good–Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good–Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
@article{FavNipTeh2015b,
author = {Favaro, S. and Nipoti, B. and Teh, Y. W.},
doi = {10.1111/biom.12366},
journal = {Biometrics},
title = {Rediscovery of {Good-Turing} Estimators via {B}ayesian Nonparametrics},
year = {2015},
bdsk-url-1 = {http://onlinelibrary.wiley.com/doi/10.1111/biom.12366/abstract},
bdsk-url-2 = {http://dx.doi.org/10.1111/biom.12366},
bdsk-url-3 = {https://arxiv.org/pdf/1401.0303.pdf}
}
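The Good-Turing estimator at the heart of this comparison is simple to state: the estimated probability that the next observation is a previously unseen species is the number of singletons divided by the sample size. A minimal sketch:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the discovery probability: n1 / n,
    where n1 is the number of species observed exactly once."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

p_new = good_turing_missing_mass(list("aaabbbccddef"))  # singletons: e, f
print(p_new)  # 2/12
```

The paper shows that, under a two-parameter Poisson-Dirichlet prior, the Bayesian nonparametric estimators are asymptotically equivalent to smoothed versions of this quantity.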
P. G. Moreno
,
A. Artés-Rodríguez
,
Y. W. Teh
,
F. Perez-Cruz
,
Bayesian Nonparametric Crowdsourcing, Journal of Machine Learning Research (JMLR), 2015.
Crowdsourcing has been proven to be an effective and efficient tool to annotate large data sets. User annotations are often noisy, so methods to combine the annotations to produce reliable estimates of the ground truth are necessary. We claim that considering the existence of clusters of users in this combination step can improve the performance. This is especially important in early stages of crowdsourcing implementations, where the number of annotations is low. At this stage there is not enough information to accurately estimate the bias introduced by each annotator separately, so we have to resort to models that consider the statistical links among them. In addition, finding these clusters is interesting in itself as knowing the behavior of the pool of annotators allows implementing efficient active learning strategies. Based on this, we propose in this paper two new fully unsupervised models based on a Chinese restaurant process (CRP) prior and a hierarchical structure that allows inferring these groups jointly with the ground truth and the properties of the users. Efficient inference algorithms based on Gibbs sampling with auxiliary variables are proposed. Finally, we perform experiments, both on synthetic and real databases, to show the advantages of our models over state-of-the-art algorithms.
@article{MorArtTeh2015a,
author = {Moreno, P. G. and Art\'es-Rodr\'iguez, A. and Teh, Y. W. and Perez-Cruz, F.},
journal = {Journal of Machine Learning Research (JMLR)},
title = {{B}ayesian Nonparametric Crowdsourcing},
year = {2015},
bdsk-url-1 = {http://www.jmlr.org/papers/v16/moreno15a.html},
bdsk-url-2 = {http://www.jmlr.org/papers/volume16/moreno15a/moreno15a.pdf}
}
R. P. Adams
,
E. B. Fox
,
E. B. Sudderth
,
Y. W. Teh
,
Guest Editors’ Introduction to the Special Issue on Bayesian Nonparametrics, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
@article{AdaFoxSud2015a,
author = {Adams, R. P. and Fox, E. B. and Sudderth, E. B. and Teh, Y. W.},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
title = {Guest Editors' Introduction to the Special Issue on Bayesian Nonparametrics},
year = {2015},
bdsk-url-1 = {http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=7004096}
}
M. Lomeli
,
S. Favaro
,
Y. W. Teh
,
A hybrid sampler for Poisson-Kingman mixture models, in Advances in Neural Information Processing Systems (NeurIPS), 2015.
This paper concerns the introduction of a new Markov Chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel and compact way of representing the infinite dimensional component of the model such that, while this infinite component is explicitly represented, the scheme has lower memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the efficacy of the proposed MCMC algorithm against existing marginal and conditional MCMC samplers.
@inproceedings{LomFavTeh2015a,
author = {Lomeli, M. and Favaro, S. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {A hybrid sampler for {Poisson-Kingman} mixture models},
year = {2015},
bdsk-url-1 = {http://papers.nips.cc/paper/5799-a-hybrid-sampler-for-poisson-kingman-mixture-models},
bdsk-url-2 = {http://papers.nips.cc/paper/5799-a-hybrid-sampler-for-poisson-kingman-mixture-models.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/5799-a-hybrid-sampler-for-poisson-kingman-mixture-models-supplemental.zip}
}
M. De Iorio
,
S. Favaro
,
Y. W. Teh
,
Bayesian Inference on Population Structure: From Parametric to Nonparametric Modeling, in Nonparametric Bayesian Inference in Biostatistics, Springer, 2015.
Making inference on population structure from genotype data requires identifying the actual subpopulations and assigning individuals to these populations. The source populations are assumed to be in Hardy-Weinberg equilibrium, but the allelic frequencies of these populations and even the number of populations present in a sample are unknown. In this chapter we present a review of some Bayesian parametric and nonparametric models for making inference on population structure, with emphasis on model-based clustering methods. Our aim is to show how recent developments in Bayesian nonparametrics have been usefully exploited in order to introduce natural nonparametric counterparts of some of the most celebrated parametric approaches for inferring population structure. We use data from the 1000 Genomes project (http://www.1000genomes.org/) to provide a brief illustration of some of these nonparametric approaches.
@incollection{De-FavTeh2015a,
author = {{De Iorio}, M. and Favaro, S. and Teh, Y. W.},
booktitle = {Nonparametric {B}ayesian Inference in Biostatistics},
doi = {10.1007/978-3-319-19518-6_7},
publisher = {Springer},
title = {{B}ayesian Inference on Population Structure: From Parametric to Nonparametric Modeling},
year = {2015},
bdsk-url-1 = {http://link.springer.com/chapter/10.1007/978-3-319-19518-6_7},
bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-319-19518-6_7}
}
T. Lienart
,
Y. W. Teh
,
A. Doucet
,
Expectation Particle Belief Propagation, in Advances in Neural Information Processing Systems (NeurIPS), 2015.
We propose an original particle-based implementation of the Loopy Belief Propagation (LBP) algorithm for pairwise Markov Random Fields (MRF) on a continuous state space. The algorithm adaptively constructs efficient proposal distributions approximating the local beliefs at each node of the MRF. This is achieved by considering proposal distributions in the exponential family whose parameters are updated iteratively in an Expectation Propagation (EP) framework. The proposed particle scheme provides consistent estimation of the LBP marginals as the number of particles increases. We demonstrate that it provides more accurate results than the Particle Belief Propagation (PBP) algorithm of Ihler and McAllester (2009) at a fraction of the computational cost and is additionally more robust empirically. The computational complexity of our algorithm at each iteration is quadratic in the number of particles. We also propose an accelerated implementation with sub-quadratic computational complexity which still provides consistent estimates of the loopy BP marginal distributions and performs almost as well as the original procedure.
@inproceedings{LieTehDou2015a,
author = {Lienart, T. and Teh, Y. W. and Doucet, A.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Expectation Particle Belief Propagation},
year = {2015},
bdsk-url-1 = {http://papers.nips.cc/paper/5674-expectation-particle-belief-propagation},
bdsk-url-2 = {http://papers.nips.cc/paper/5674-expectation-particle-belief-propagation.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/5674-expectation-particle-belief-propagation-supplemental.zip}
}
S. Favaro
,
B. Nipoti
,
Y. W. Teh
,
Random variate generation for Laguerre-type exponentially tilted α-stable distributions, Electronic Journal of Statistics, vol. 9, 1230–1242, 2015.
Exact sampling methods have been recently developed for generating random variates for exponentially tilted α-stable distributions. In this paper we show how to generate, exactly, random variates for a more general class of tilted α-stable distributions, which is referred to as the class of Laguerre-type exponentially tilted α-stable distributions. Besides the exponentially tilted α-stable distribution, this class also includes the Erlang tilted α-stable distribution. This is a special case of the so-called gamma tilted α-stable distribution, for which an efficient exact random variate generator is currently not available in the literature. Our result fills this gap.
@article{FavNipTeh2015a,
author = {Favaro, S. and Nipoti, B. and Teh, Y. W.},
doi = {10.1214/15-EJS1033},
journal = {Electronic Journal of Statistics},
pages = {1230-1242},
title = {Random variate generation for {L}aguerre-type exponentially tilted {$\alpha$}-stable distributions},
volume = {9},
year = {2015},
bdsk-url-1 = {http://projecteuclid.org/euclid.ejs/1433982944},
bdsk-url-2 = {http://dx.doi.org/10.1214/15-EJS1033},
bdsk-url-3 = {http://projecteuclid.org/download/pdfview_1/euclid.ejs/1433982944}
}
M. Balog
,
Y. W. Teh
,
The Mondrian Process for Machine Learning, 2015.
This report is concerned with the Mondrian process and its applications in machine learning. The Mondrian process is a guillotine-partition-valued stochastic process that possesses an elegant self-consistency property. The first part of the report uses simple concepts from applied probability to define the Mondrian process and explore its properties.
The Mondrian process has been used as the main building block of a clever online random forest classification algorithm that turns out to be equivalent to its batch counterpart. We outline a slight adaptation of this algorithm to regression, as the remainder of the report uses regression as a case study of how Mondrian processes can be utilized in machine learning. In particular, the Mondrian process will be used to construct a fast approximation to the computationally expensive kernel ridge regression problem with a Laplace kernel.
The complexity of random guillotine partitions generated by a Mondrian process and hence the complexity of the resulting regression models is controlled by a lifetime hyperparameter. It turns out that these models can be efficiently trained and evaluated for all lifetimes in a given range at once, without needing to retrain them from scratch for each lifetime value. This leads to an efficient procedure for determining the right model complexity for a dataset at hand.
The limitation of having a single lifetime hyperparameter will motivate the final Mondrian grid model, in which each input dimension is endowed with its own lifetime parameter. In this model we preserve the property that its hyperparameters can be tweaked without needing to retrain the modified model from scratch.
@unpublished{BalTeh2015a,
author = {Balog, M. and Teh, Y. W.},
note = {ArXiv e-prints: 1507.05181},
title = {The {M}ondrian Process for Machine Learning},
year = {2015},
bdsk-url-1 = {https://arxiv.org/pdf/1507.05181.pdf}
}
P. Orbanz
,
L. James
,
Y. W. Teh
,
Scaled subordinators and generalizations of the Indian buffet process, 2015.
We study random families of subsets of ℕ that are similar to exchangeable random partitions, but do not require constituent sets to be disjoint: Each element of ℕ may be contained in multiple subsets. One class of such objects, known as Indian buffet processes, has become a popular tool in machine learning. Based on an equivalence between Indian buffet and scale-invariant Poisson processes, we identify a random scaling variable whose role is similar to that played in exchangeable partition models by the total mass of a random measure. Analogous to the construction of exchangeable partitions from normalized subordinators, random families of sets can be constructed from randomly scaled subordinators. Coupling to a heavy-tailed scaling variable induces a power law on the number of sets containing the first n elements. Several examples, with properties desirable in applications, are derived explicitly. A relationship to exchangeable partitions is made precise as a correspondence between scaled subordinators and Poisson-Kingman measures, generalizing a result of Arratia, Barbour and Tavare on scale-invariant processes.
@unpublished{OrbJamTeh2015a,
author = {Orbanz, P. and James, L. and Teh, Y. W.},
note = {ArXiv e-prints: 1510.07309},
title = {Scaled subordinators and generalizations of the {I}ndian buffet process},
year = {2015},
bdsk-url-1 = {https://arxiv.org/pdf/1510.07309v1.pdf}
}
M. De Iorio, L. Elliott, S. Favaro, Y. W. Teh, Bayesian Nonparametric Inference of Population Admixtures, 2015.
We propose a Bayesian nonparametric model to infer population admixture, extending the Hierarchical Dirichlet Process to allow for correlation between loci due to Linkage Disequilibrium. Given multilocus genotype data from a sample of individuals, the model allows classifying individuals as unadmixed or admixed, inferring the number of subpopulations ancestral to an admixed population, and identifying the population of origin of chromosomal regions. Our model does not assume any specific mutation process and can be applied to most of the commonly used genetic markers. We present an MCMC algorithm to perform posterior inference from the model and discuss methods to summarise the MCMC output for the analysis of population admixture. We demonstrate the performance of the proposed model in simulations and in a real application, using genetic data from the EDAR gene, which is considered to be ancestry-informative due to well-known variations in allele frequency as well as phenotypic effects across ancestry. The structure analysis of this dataset leads to the identification of a rare haplotype in Europeans.
@unpublished{De-EllFav2015a,
author = {{De Iorio}, M. and Elliott, L. and Favaro, S. and Teh, Y. W.},
note = {ArXiv e-prints: 1503.08278},
title = {{B}ayesian Nonparametric Inference of Population Admixtures},
year = {2015},
bdsk-url-1 = {https://arxiv.org/pdf/1503.08278v1.pdf}
}
B. Lakshminarayanan, D. M. Roy, Y. W. Teh, Particle Gibbs for Bayesian Additive Regression Trees, in Proceedings of the International Conference on Artificial Intelligence and Statistics, 2015.
Additive regression trees are flexible non-parametric models and popular off-the-shelf tools for real-world non-linear regression. In application domains, such as bioinformatics, where there is also demand for probabilistic predictions with measures of uncertainty, the Bayesian additive regression trees (BART) model, introduced by Chipman et al. (2010), is increasingly popular. As data sets have grown in size, however, the standard Metropolis–Hastings algorithms used to perform inference in BART are proving inadequate. In particular, these Markov chains make local changes to the trees and suffer from slow mixing when the data are high-dimensional or the best-fitting trees are more than a few layers deep. We present a novel sampler for BART based on the Particle Gibbs (PG) algorithm (Andrieu et al., 2010) and a top-down particle filtering algorithm for Bayesian decision trees (Lakshminarayanan et al., 2013). Rather than making local changes to individual trees, the PG sampler proposes a complete tree to fit the residual. Experiments show that the PG sampler outperforms existing samplers in many settings.
@inproceedings{LakRoyTeh2015b,
author = {Lakshminarayanan, B. and Roy, D. M. and Teh, Y. W.},
booktitle = {Proceedings of the International Conference on Artificial Intelligence and Statistics},
title = {Particle {G}ibbs for {B}ayesian Additive Regression Trees},
year = {2015},
bdsk-url-1 = {http://jmlr.org/proceedings/papers/v38/lakshminarayanan15.html},
bdsk-url-2 = {http://jmlr.org/proceedings/papers/v38/lakshminarayanan15.pdf},
bdsk-url-3 = {http://jmlr.org/proceedings/papers/v38/lakshminarayanan15-supp.pdf}
}
2014
M. Welling, Y. W. Teh, C. Andrieu, J. Kominiarczuk, T. Meeds, B. Shahbaba, S. Vollmer, Bayesian Inference and Big Data: A Snapshot from a Workshop, ISBA Bulletin, 2014.
@article{WelTehAnd2014a,
author = {Welling, M. and Teh, Y. W. and Andrieu, C. and Kominiarczuk, J. and Meeds, T. and Shahbaba, B. and Vollmer, S.},
journal = {ISBA Bulletin},
title = {Bayesian Inference and Big Data: A Snapshot from a Workshop},
year = {2014},
bdsk-url-1 = {https://bayesian.org/sites/default/files/fm/bulletins/1412.pdf}
}
M. Xu, B. Lakshminarayanan, Y. W. Teh, J. Zhu, B. Zhang, Distributed Bayesian Posterior Sampling via Moment Sharing, in Advances in Neural Information Processing Systems, 2014.
We propose a distributed Markov chain Monte Carlo (MCMC) inference algorithm for large scale Bayesian posterior simulation. We assume that the dataset is partitioned and stored across nodes of a cluster. Our procedure involves an independent MCMC posterior sampler at each node based on its local partition of the data. Moment statistics of the local posteriors are collected from each sampler and propagated across the cluster using expectation propagation message passing with low communication costs. The moment sharing scheme improves posterior estimation quality by enforcing agreement among the samplers. We demonstrate the speed and inference quality of our method with empirical studies on Bayesian logistic regression and sparse linear regression with a spike-and-slab prior.
@inproceedings{XuLakTeh2014a,
author = {Xu, M. and Lakshminarayanan, B. and Teh, Y. W. and Zhu, J. and Zhang, B.},
booktitle = {Advances in Neural Information Processing Systems},
title = {Distributed {B}ayesian Posterior Sampling via Moment Sharing},
year = {2014},
bdsk-url-1 = {http://papers.nips.cc/paper/5596-distributed-bayesian-posterior-sampling-via-moment-sharing},
bdsk-url-2 = {http://papers.nips.cc/paper/5596-distributed-bayesian-posterior-sampling-via-moment-sharing.pdf},
bdsk-url-3 = {https://github.com/BigBayes/SMS}
}
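The moment-combination idea can be illustrated in a simplified one-dimensional Gaussian setting. This is a hedged sketch of one ingredient only, omitting the expectation propagation message-passing loop the paper uses to enforce agreement among samplers; `combine_gaussian_moments` is an illustrative name.

```python
import numpy as np

def combine_gaussian_moments(means, variances, prior_mean, prior_var):
    """Combine K Gaussian approximations of local posteriors (each
    formed from one data shard times the full prior) into a single
    Gaussian estimate of the full posterior, in natural parameters.

    Multiplying the K local posteriors over-counts the prior K-1
    times, so its natural parameters are subtracted back out.
    """
    prec = np.array([1.0 / v for v in variances])
    prec_mean = np.array([m / v for m, v in zip(means, variances)])
    k = len(means)
    prior_prec = 1.0 / prior_var
    global_prec = prec.sum() - (k - 1) * prior_prec
    global_prec_mean = prec_mean.sum() - (k - 1) * prior_mean * prior_prec
    return global_prec_mean / global_prec, 1.0 / global_prec
```

In practice each node would report the empirical mean and variance of its MCMC samples rather than exact local posterior moments.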
S. Favaro, M. Lomeli, Y. W. Teh, On a Class of σ-stable Poisson-Kingman Models and an Effective Marginalized Sampler, Statistics and Computing, 2014.
We investigate the use of a large class of discrete random probability measures, which is referred to as the class Q, in the context of Bayesian nonparametric mixture modeling. The class Q encompasses both the two-parameter Poisson–Dirichlet process and the normalized generalized Gamma process, thus allowing us to comparatively study the inferential advantages of these two well-known nonparametric priors. Apart from a highly flexible parameterization, the distinguishing feature of the class Q is the availability of a tractable posterior distribution. This feature, in turn, allows us to derive an efficient marginal MCMC algorithm for posterior sampling within the framework of mixture models. We demonstrate the efficacy of our modeling framework on both one-dimensional and multi-dimensional datasets.
@article{FavLomTeh2014a,
author = {Favaro, S. and Lomeli, M. and Teh, Y. W.},
doi = {10.1007/s11222-014-9499-4},
journal = {Statistics and Computing},
title = {On a Class of {$\sigma$}-stable {P}oisson-{K}ingman Models and an Effective Marginalized Sampler},
year = {2014},
bdsk-url-1 = {http://projecteuclid.org/euclid.ejs/1433982944},
bdsk-url-2 = {http://dx.doi.org/10.1007/s11222-014-9499-4}
}
T. Herlau, M. Mörup, Y. W. Teh, M. N. Schmidt, Adaptive Reconfiguration Moves for Dirichlet Mixtures, submitted, 2014.
Bayesian mixture models are widely applied for unsupervised learning and exploratory data analysis. Markov chain Monte Carlo based on Gibbs sampling and split-merge moves are widely used for inference in these models. However, both methods are restricted to limited types of transitions and suffer from torpid mixing and low acceptance rates even for problems of modest size. We propose a method that considers a broader range of transitions that are close to equilibrium by exploiting multiple chains in parallel and using the past states adaptively to inform the proposal distribution. The method significantly improves on Gibbs and split-merge sampling as quantified using convergence diagnostics and acceptance rates. Adaptive MCMC methods which use past states to inform the proposal distribution have given rise to many ingenious sampling schemes for continuous problems, and the present work can be seen as an important first step in bringing these benefits to partition-based problems.
@article{HerMorTeh2014a,
author = {Herlau, T. and M\"orup, M. and Teh, Y. W. and Schmidt, M. N.},
journal = {submitted},
title = {Adaptive Reconfiguration Moves for Dirichlet Mixtures},
year = {2014},
bdsk-url-1 = {https://arxiv.org/pdf/1406.0071v1.pdf}
}
S. Favaro, M. Lomeli, B. Nipoti, Y. W. Teh, On the Stick-Breaking Representation of σ-stable Poisson-Kingman Models, Electronic Journal of Statistics, vol. 8, 1063–1085, 2014.
In this paper we investigate the stick-breaking representation for the class of σ-stable Poisson-Kingman models, also known as Gibbs-type random probability measures. This class includes as special cases most of the discrete priors commonly used in Bayesian nonparametrics, such as the two parameter Poisson-Dirichlet process and the normalized generalized Gamma process. Under the assumption σ=u/v, for any coprime integers 1≤u<v such that u/v≤1/2, we show that a σ-stable Poisson-Kingman model admits an explicit stick-breaking representation in terms of random variables which are obtained by suitably transforming Gamma random variables and products of independent Beta and Gamma random variables.
@article{FavLomNip2014a,
author = {Favaro, S. and Lomeli, M. and Nipoti, B. and Teh, Y. W.},
doi = {10.1214/14-EJS921},
journal = {Electronic Journal of Statistics},
pages = {1063-1085},
title = {On the Stick-Breaking Representation of {$\sigma$}-stable {P}oisson-{K}ingman Models},
volume = {8},
year = {2014},
bdsk-url-1 = {http://projecteuclid.org/euclid.ejs/1407243242},
bdsk-url-2 = {http://dx.doi.org/10.1214/14-EJS921},
bdsk-url-3 = {http://projecteuclid.org/download/pdfview_1/euclid.ejs/1407243242}
}
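As a concrete special case of the family studied above, the two-parameter Poisson-Dirichlet (Pitman-Yor) process has the classical stick-breaking representation with V_k ~ Beta(1 - sigma, theta + k*sigma). A minimal sketch under the standard conditions 0 <= sigma < 1 and theta > -sigma (the function name is illustrative):

```python
import random

def pitman_yor_sticks(sigma, theta, n, rng=random):
    """First n stick-breaking weights of a Pitman-Yor process:
    V_k ~ Beta(1 - sigma, theta + k*sigma),
    W_k = V_k * prod_{j<k} (1 - V_j).
    """
    weights, remaining = [], 1.0
    for k in range(1, n + 1):
        v = rng.betavariate(1.0 - sigma, theta + k * sigma)
        weights.append(v * remaining)
        remaining *= 1.0 - v  # length of the stick still unbroken
    return weights
```

Setting sigma = 0 recovers the Dirichlet process; the paper derives analogous explicit representations for σ = u/v with suitable coprime integers.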
B. Lakshminarayanan, D. Roy, Y. W. Teh, Mondrian Forests: Efficient Online Random Forests, in Advances in Neural Information Processing Systems (NeurIPS), 2014.
@inproceedings{LakRoyTeh2014a,
author = {Lakshminarayanan, B. and Roy, D. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {{M}ondrian Forests: Efficient Online Random Forests},
year = {2014},
bdsk-url-1 = {http://papers.nips.cc/paper/5234-mondrian-forests-efficient-online-random-forests},
bdsk-url-2 = {http://papers.nips.cc/paper/5234-mondrian-forests-efficient-online-random-forests.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/5234-mondrian-forests-efficient-online-random-forests-supplemental.zip},
bdsk-url-4 = {https://github.com/balajiln/mondrianforest}
}
B. Paige, F. Wood, A. Doucet, Y. W. Teh, Asynchronous Anytime Sequential Monte Carlo, in Advances in Neural Information Processing Systems (NeurIPS), 2014.
We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional sequential Monte Carlo algorithms that is amenable to parallel and distributed implementations. It uses no barrier synchronizations, which leads to improved particle throughput and memory efficiency. It is an anytime algorithm in the sense that it can be run forever to emit an unbounded number of particles while keeping within a fixed memory budget. We prove that the particle cascade provides an unbiased marginal likelihood estimator which can be straightforwardly plugged into existing pseudo-marginal methods.
@inproceedings{PaiWooDou2014a,
author = {Paige, B. and Wood, F. and Doucet, A. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Asynchronous Anytime Sequential {M}onte {C}arlo},
year = {2014},
bdsk-url-1 = {http://papers.nips.cc/paper/5450-asynchronous-anytime-sequential-monte-carlo},
bdsk-url-2 = {http://papers.nips.cc/paper/5450-asynchronous-anytime-sequential-monte-carlo.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/5450-asynchronous-anytime-sequential-monte-carlo-supplemental.zip}
}
F. Caron, Y. W. Teh, B. T. Murphy, Bayesian Nonparametric Plackett-Luce Models for the Analysis of Preferences for College Degree Programmes, Annals of Applied Statistics, vol. 8, no. 2, 1145–1181, 2014.
In this paper we propose a Bayesian nonparametric model for clustering partial ranking data. We start by developing a Bayesian nonparametric extension of the popular Plackett–Luce choice model that can handle an infinite number of choice items. Our framework is based on the theory of random atomic measures, with the prior specified by a completely random measure. We characterise the posterior distribution given data, and derive a simple and effective Gibbs sampler for posterior simulation. We then develop a Dirichlet process mixture extension of our model and apply it to investigate the clustering of preferences for college degree programmes amongst Irish secondary school graduates. The existence of clusters of applicants who have similar preferences for degree programmes is established and we determine that subject matter and geographical location of the third level institution characterise these clusters.
@article{CarTehMur2014a,
author = {Caron, F. and Teh, Y. W. and Murphy, B. T.},
doi = {10.1214/14-AOAS717},
journal = {Annals of Applied Statistics},
number = {2},
pages = {1145-1181},
title = {{B}ayesian Nonparametric {P}lackett-{L}uce Models for the Analysis of Preferences for College Degree Programmes},
volume = {8},
year = {2014},
bdsk-url-1 = {https://projecteuclid.org/euclid.aoas/1404229529},
bdsk-url-2 = {http://dx.doi.org/10.1214/14-AOAS717},
bdsk-url-3 = {https://projecteuclid.org/download/pdfview_1/euclid.aoas/1404229529},
bdsk-url-4 = {http://www.stats.ox.ac.uk/~caron/code/bnppl/index.html}
}
2013
S. Favaro, Y. W. Teh, MCMC for Normalized Random Measure Mixture Models, Statistical Science, vol. 28, no. 3, 335–359, 2013.
This paper concerns the use of Markov chain Monte Carlo methods for posterior sampling in Bayesian nonparametric mixture models with normalized random measure priors. Making use of some recent posterior characterizations for the class of normalized random measures, we propose novel Markov chain Monte Carlo methods of both marginal type and conditional type. The proposed marginal samplers are generalizations of Neal’s well-regarded Algorithm 8 for Dirichlet process mixture models, whereas the conditional sampler is a variation of those recently introduced in the literature. For both the marginal and conditional methods, we consider as a running example a mixture model with an underlying normalized generalized Gamma process prior, and describe comparative simulation results demonstrating the efficacies of the proposed methods.
@article{FavTeh2013a,
author = {Favaro, S. and Teh, Y. W.},
journal = {Statistical Science},
number = {3},
pages = {335-359},
title = {{MCMC} for Normalized Random Measure Mixture Models},
volume = {28},
year = {2013},
bdsk-url-1 = {https://projecteuclid.org/euclid.ss/1377696940},
bdsk-url-2 = {https://projecteuclid.org/download/pdfview_1/euclid.ss/1377696940}
}
B. Lakshminarayanan, D. Roy, Y. W. Teh, Top-down Particle Filtering for Bayesian Decision Trees, in International Conference on Machine Learning (ICML), 2013.
Decision tree learning is a popular approach for classification and regression in machine learning and statistics, and Bayesian formulations—which introduce a prior distribution over decision trees, and formulate learning as posterior inference given data—have been shown to produce competitive performance. Unlike classic decision tree learning algorithms like ID3, C4.5 and CART, which work in a top-down manner, existing Bayesian algorithms produce an approximation to the posterior distribution by evolving a complete tree (or collection thereof) iteratively via local Monte Carlo modifications to the structure of the tree, e.g., using Markov chain Monte Carlo (MCMC). We present a sequential Monte Carlo (SMC) algorithm that instead works in a top-down manner, mimicking the behavior and speed of classic algorithms. We demonstrate empirically that our approach delivers accuracy comparable to the most popular MCMC method, but operates more than an order of magnitude faster, and thus represents a better computation-accuracy tradeoff.
@inproceedings{LakRoyTeh2013a,
author = {Lakshminarayanan, B. and Roy, D. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {Top-down Particle Filtering for {B}ayesian Decision Trees},
year = {2013},
bdsk-url-1 = {http://jmlr.csail.mit.edu/proceedings/papers/v28/lakshminarayanan13.pdf},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v28/lakshminarayanan13-supp.pdf},
bdsk-url-3 = {http://www.gatsby.ucl.ac.uk/~balaji/treesmc/}
}
C. Blundell, Y. W. Teh, Bayesian Hierarchical Community Discovery, in Advances in Neural Information Processing Systems (NeurIPS), 2013.
We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks. Our model is a tree-structured mixture of potentially exponentially many stochastic blockmodels. We describe a family of greedy agglomerative model selection algorithms whose worst case scales quadratically in the number of vertices of the network, but is independent of the number of communities. Our algorithms are two orders of magnitude faster than the infinite relational model, achieving comparable or better accuracy.
@inproceedings{BluTeh2013a,
author = {Blundell, C. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {{B}ayesian Hierarchical Community Discovery},
year = {2013},
bdsk-url-1 = {https://papers.nips.cc/paper/5048-bayesian-hierarchical-community-discovery},
bdsk-url-2 = {https://papers.nips.cc/paper/5048-bayesian-hierarchical-community-discovery.pdf}
}
V. Rao, Y. W. Teh, Fast MCMC sampling for Markov jump processes and extensions, Journal of Machine Learning Research (JMLR), vol. 14, 3295–3320, 2013.
Markov jump processes (or continuous-time Markov chains) are a simple and important class of continuous-time dynamical systems. In this paper, we tackle the problem of simulating from the posterior distribution over paths in these models, given partial and noisy observations. Our approach is an auxiliary variable Gibbs sampler, and is based on the idea of uniformization. This sets up a Markov chain over paths by alternately sampling a finite set of virtual jump times given the current path, and then sampling a new path given the set of extant and virtual jump times. The first step involves simulating a piecewise-constant inhomogeneous Poisson process, while for the second, we use a standard hidden Markov model forward filtering-backward sampling algorithm. Our method is exact and does not involve approximations like time-discretization. We demonstrate how our sampler extends naturally to MJP-based models like Markov-modulated Poisson processes and continuous-time Bayesian networks, and show significant computational benefits over state-of-the-art MCMC samplers for these models.
@article{RaoTeh2013a,
author = {Rao, V. and Teh, Y. W.},
journal = {Journal of Machine Learning Research (JMLR)},
pages = {3295-3320},
title = {Fast {MCMC} sampling for {M}arkov jump processes and extensions},
volume = {14},
year = {2013},
bdsk-url-1 = {http://jmlr.org/papers/v14/rao13a.html},
bdsk-url-2 = {http://jmlr.org/papers/volume14/rao13a/rao13a.pdf}
}
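The uniformization construction that underlies this sampler can be sketched by simulating an MJP path forward in time: candidate jump times come from a Poisson process with rate Ω at least as large as every |Q_ii|, states move according to B = I + Q/Ω, and virtual (self-)jumps are discarded. This is a hedged illustration of the construction, not the posterior Gibbs sampler itself; `uniformized_path` is an illustrative name.

```python
import numpy as np

def uniformized_path(Q, T, s0, omega=None, rng=None):
    """Simulate a Markov jump process path on [0, T] by uniformization.

    Q: rate matrix (rows sum to 0). Candidate jump times are a Poisson
    process with rate omega >= max_i |Q[i, i]|; at each candidate time
    the state moves according to B = I + Q/omega, which allows virtual
    self-jumps. Returns (times, states) with times[0] == 0.
    """
    rng = rng or np.random.default_rng(0)
    Q = np.asarray(Q, dtype=float)
    omega = omega or 2.0 * np.max(-np.diag(Q))
    B = np.eye(Q.shape[0]) + Q / omega  # rows of B sum to one
    n_candidates = rng.poisson(omega * T)
    candidate_times = np.sort(rng.uniform(0.0, T, size=n_candidates))
    times, states = [0.0], [s0]
    for t in candidate_times:
        s_next = rng.choice(Q.shape[0], p=B[states[-1]])
        if s_next != states[-1]:  # keep only actual jumps
            times.append(float(t))
            states.append(int(s_next))
    return times, states
```

The paper's Gibbs sampler alternates the reverse of this: resampling the virtual jump times given a path, then a new path given all candidate times by forward filtering-backward sampling.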
S. Patterson, Y. W. Teh, Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex, in Advances in Neural Information Processing Systems (NeurIPS), 2013.
In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, Stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied online. We apply this method to latent Dirichlet allocation in an online setting, and demonstrate that it achieves substantial performance improvements over state-of-the-art online variational Bayesian methods.
@inproceedings{PatTeh2013a,
author = {Patterson, S. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Stochastic Gradient {R}iemannian {L}angevin Dynamics on the Probability Simplex},
year = {2013},
bdsk-url-1 = {https://papers.nips.cc/paper/4883-stochastic-gradient-riemannian-langevin-dynamics-on-the-probability-simplex},
bdsk-url-2 = {https://papers.nips.cc/paper/4883-stochastic-gradient-riemannian-langevin-dynamics-on-the-probability-simplex.pdf},
bdsk-url-3 = {https://papers.nips.cc/paper/4883-stochastic-gradient-riemannian-langevin-dynamics-on-the-probability-simplex-supplemental.zip},
bdsk-url-4 = {https://github.com/BigBayes/SGRLD}
}
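The Riemannian simplex update is model-specific, but the plain stochastic gradient Langevin step it builds on (Welling and Teh, 2011) is easy to sketch on a toy model. This is a hedged illustration with an assumed model (a Gaussian mean with a N(0, 10) prior); the function name, step size, and model are illustrative, not from the paper.

```python
import numpy as np

def sgld_gaussian_mean(data, n_steps=2000, step=1e-2, batch=10, seed=0):
    """Sample the posterior over a Gaussian mean with SGLD.

    Assumed model: theta ~ N(0, 10), x_i ~ N(theta, 1). Each step uses
    a minibatch gradient estimate plus injected Gaussian noise whose
    variance matches the step size.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    theta, samples = 0.0, []
    for _ in range(n_steps):
        xb = rng.choice(data, size=batch)
        # Stochastic gradient of the log posterior: prior term plus
        # the minibatch likelihood term rescaled by n / batch.
        grad = -theta / 10.0 + (n / batch) * np.sum(xb - theta)
        theta += 0.5 * step * grad + rng.normal(0.0, np.sqrt(step))
        samples.append(theta)
    return np.array(samples)
```

The SGRLD method of the paper additionally preconditions this update with a Riemannian metric adapted to the simplex geometry.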
X. Zhang, W. S. Lee, Y. W. Teh, Learning with Invariances via Linear Functionals on Reproducing Kernel Hilbert Space, in Advances in Neural Information Processing Systems, 2013.
Incorporating invariance information is important for many learning problems. To exploit invariances, most existing methods resort to approximations that either lead to expensive optimization problems such as semi-definite programming, or rely on separation oracles to retain tractability. Some methods further limit the space of functions and settle for non-convex models. In this paper, we propose a framework for learning in reproducing kernel Hilbert spaces (RKHS) using local invariances that explicitly characterize the behavior of the target function around data instances. These invariances are compactly encoded as linear functionals whose values are penalized by some loss function. Based on a representer theorem that we establish, our formulation can be efficiently optimized via a convex program. For the representer theorem to hold, the linear functionals are required to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHS and invariances. Experiments on learning with unlabeled data and transform invariances show that the proposed method yields better or similar results compared with the state of the art.
@inproceedings{ZhaLeeTeh2013a,
author = {Zhang, X. and Lee, W. S. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems},
title = {Learning with Invariances via Linear Functionals on Reproducing Kernel {H}ilbert Space},
year = {2013},
bdsk-url-1 = {http://papers.nips.cc/paper/4895-learning-with-invariance-via-linear-functionals-on-reproducing-kernel-hilbert-space},
bdsk-url-2 = {http://papers.nips.cc/paper/4895-learning-with-invariance-via-linear-functionals-on-reproducing-kernel-hilbert-space.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/4895-learning-with-invariance-via-linear-functionals-on-reproducing-kernel-hilbert-space-supplemental.zip}
}
C. Chen, V. A. Rao, W. Buntine, Y. W. Teh, Dependent Normalized Random Measures, in International Conference on Machine Learning (ICML), 2013.
@inproceedings{CheRaoBun2013a,
author = {Chen, C. and Rao, V. A. and Buntine, W. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {Dependent Normalized Random Measures},
year = {2013},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v28/chen13i.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v28/chen13i.pdf},
bdsk-url-3 = {http://www.jmlr.org/proceedings/papers/v28/chen13i-supp.pdf}
}
B. Lakshminarayanan, Y. W. Teh, Inferring Ground Truth from Multi-annotator Ordinal Data: A Probabilistic Approach, 2013.
A popular approach for large scale data annotation tasks is crowdsourcing, wherein each data point is labeled by multiple noisy annotators. We consider the problem of inferring ground truth from noisy ordinal labels obtained from multiple annotators of varying and unknown expertise levels. Annotation models for ordinal data have been proposed mostly as extensions of their binary/categorical counterparts and have received little attention in the crowdsourcing literature. We propose a new model for crowdsourced ordinal data that accounts for instance difficulty as well as annotator expertise, and derive a variational Bayesian inference algorithm for parameter estimation. We analyze the ordinal extensions of several state-of-the-art annotator models for binary/categorical labels and evaluate the performance of all the models on two real world datasets containing ordinal query-URL relevance scores, collected through Amazon’s Mechanical Turk. Our results indicate that the proposed model performs as well as or better than existing state-of-the-art methods and is more resistant to ‘spammy’ annotators (i.e., annotators who assign labels randomly without actually looking at the instance) than popular baselines such as mean, median, and majority vote, which do not account for annotator expertise.
@unpublished{LakTeh2013a,
author = {Lakshminarayanan, B. and Teh, Y. W.},
note = {ArXiv e-prints: 1305.0015},
title = {Inferring Ground Truth from Multi-annotator Ordinal Data: A Probabilistic Approach},
year = {2013},
bdsk-url-1 = {https://arxiv.org/pdf/1305.0015.pdf}
}
2012
F. Caron, Y. W. Teh, Bayesian Nonparametric Models for Ranked Data, in Advances in Neural Information Processing Systems (NeurIPS), 2012.
We develop a Bayesian nonparametric extension of the popular Plackett-Luce choice model that can handle an infinite number of choice items. Our framework is based on the theory of random atomic measures, with the prior specified by a gamma process. We derive a posterior characterization and a simple and effective Gibbs sampler for posterior simulation. We then develop a time-varying extension of our model, and apply our model to the New York Times lists of weekly bestselling books.
@inproceedings{CarTeh2012a,
author = {Caron, F. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Bayesian Nonparametric Models for Ranked Data},
year = {2012},
bdsk-url-1 = {http://papers.nips.cc/paper/4624-bayesian-nonparametric-models-for-ranked-data},
bdsk-url-2 = {http://papers.nips.cc/paper/4624-bayesian-nonparametric-models-for-ranked-data.pdf},
bdsk-url-3 = {http://papers.nips.cc/paper/4624-bayesian-nonparametric-models-for-ranked-data-supplemental.zip}
}
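The finite Plackett-Luce model that this work extends assigns each item a positive weight and builds a ranking by repeatedly choosing among the remaining items with probability proportional to weight. A minimal sketch of the resulting likelihood (the function name is illustrative):

```python
import math

def plackett_luce_log_likelihood(ranking, weights):
    """Log-probability of a (partial) ranking under Plackett-Luce.

    Items are chosen one at a time; at each stage the next item is
    picked with probability proportional to its weight among the
    items not yet chosen.
    """
    remaining = sum(weights.values())
    ll = 0.0
    for item in ranking:
        ll += math.log(weights[item]) - math.log(remaining)
        remaining -= weights[item]
    return ll
```

The nonparametric extension in the paper replaces the finite weight vector with the atoms of a gamma process, allowing an infinite number of choice items.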
B. Alexe, N. Heess, Y. W. Teh, V. Ferrari, Searching for Objects Driven by Context, in Advances in Neural Information Processing Systems (NeurIPS), 2012.
The dominant visual search paradigm for object class detection is sliding windows. Although simple and effective, it is also wasteful, unnatural and rigidly hardwired. We propose strategies to search for objects which intelligently explore the space of windows by making sequential observations at locations decided based on previous observations. Our strategies adapt to the class being searched and to the content of a particular test image. Their driving force is exploiting context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. In addition to being more elegant than sliding windows, we demonstrate experimentally on the PASCAL VOC 2010 dataset that our strategies evaluate two orders of magnitude fewer windows while at the same time achieving higher detection accuracy.
@inproceedings{AleHeeTeh2012a,
author = {Alexe, Bogdan and Heess, Nicolas and Teh, Yee Whye and Ferrari, Vittorio},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Searching for Objects Driven by Context},
year = {2012},
bdsk-url-1 = {https://papers.nips.cc/paper/4717-searching-for-objects-driven-by-context},
bdsk-url-2 = {https://papers.nips.cc/paper/4717-searching-for-objects-driven-by-context.pdf}
}
V. Rao, Y. W. Teh, MCMC for Continuous-Time Discrete-State Systems, in Advances in Neural Information Processing Systems (NeurIPS), 2012.
We propose a simple and novel framework for MCMC inference in continuous-time discrete-state systems with pure jump trajectories. We construct an exact MCMC sampler for such systems by alternately sampling a random discretization of time given a trajectory of the system, and then a new trajectory given the discretization. The first step can be performed efficiently using properties of the Poisson process, while the second step can avail of discrete-time MCMC techniques based on the forward-backward algorithm. We compare our approach to particle MCMC and a uniformization-based sampler, and show its advantages.
@inproceedings{RaoTeh2012a,
author = {Rao, V. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {{MCMC} for Continuous-Time Discrete-State Systems},
year = {2012},
bdsk-url-1 = {https://papers.nips.cc/paper/4746-mcmc-for-continuous-time-discrete-state-systems},
bdsk-url-2 = {https://papers.nips.cc/paper/4746-mcmc-for-continuous-time-discrete-state-systems.pdf},
bdsk-url-3 = {https://papers.nips.cc/paper/4746-mcmc-for-continuous-time-discrete-state-systems-supplemental.zip}
}
A. Mnih, Y. W. Teh, A Fast and Simple Algorithm for Training Neural Probabilistic Language Models, in International Conference on Machine Learning (ICML), 2012.
Neural probabilistic language models (NPLMs) have recently superseded smoothed n-gram models as the best-performing model class for language modelling. Unfortunately, the adoption of NPLMs is held back by their notoriously long training times, which can be measured in weeks even for moderately-sized datasets. These are a consequence of the models being explicitly normalized, which leads to having to consider all words in the vocabulary when computing the log-likelihood gradients. We propose a fast and simple algorithm for training NPLMs based on noise-contrastive estimation, a newly introduced procedure for estimating unnormalized continuous distributions. We investigate the behaviour of the algorithm on the Penn Treebank corpus and show that it reduces the training times by more than an order of magnitude without affecting the quality of the resulting models. The algorithm is also more efficient and much more stable than importance sampling because it requires far fewer noise samples to perform well. We demonstrate the scalability of the proposed approach by training several neural language models on a 47M-word corpus with an 80K-word vocabulary, obtaining state-of-the-art results in the Microsoft Research Sentence Completion Challenge.
@inproceedings{MniTeh2012a,
author = {Mnih, A. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {A Fast and Simple Algorithm for Training Neural Probabilistic Language Models},
year = {2012},
bdsk-url-1 = {http://icml.cc/2012/papers/855.pdf}
}
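The noise-contrastive objective described above can be written down in a few lines. This is a hedged sketch assuming precomputed log-probabilities under the model and the noise distribution (`nce_loss` is an illustrative name; in the paper the model scores are unnormalized, which is precisely what NCE accommodates):

```python
import numpy as np

def nce_loss(model_logp_data, model_logp_noise,
             noise_logp_data, noise_logp_noise, k):
    """Noise-contrastive estimation objective (negated, for minimizing).

    Each data point is pitted against k noise samples; both kinds of
    sample are classified as data vs. noise via the log-odds
    log p_model(x) - log(k * p_noise(x)), and the logistic
    log-likelihood of correct classification is maximized.
    """
    def log_sigmoid(z):
        # Numerically stable log of the logistic sigmoid.
        return -np.logaddexp(0.0, -z)
    s_data = model_logp_data - (np.log(k) + noise_logp_data)
    s_noise = model_logp_noise - (np.log(k) + noise_logp_noise)
    return -(np.mean(log_sigmoid(s_data)) + k * np.mean(log_sigmoid(-s_noise)))
```

When the model matches the noise distribution exactly and k = 1, both log-odds are zero and the loss sits at 2 log 2, the chance-level classification value.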
N. Heess, D. Silver, Y. W. Teh, Actor-Critic Reinforcement Learning with Energy-Based Policies, in JMLR Workshop and Conference Proceedings: EWRL 2012, 2012.
We consider reinforcement learning in Markov decision processes with high dimensional state and action spaces. We parametrize policies using energy-based models (particularly restricted Boltzmann machines), and train them using policy gradient learning. Our approach builds upon Sallans and Hinton (2004), who parameterized value functions using energy-based models, trained using a non-linear variant of temporal-difference (TD) learning. Unfortunately, non-linear TD is known to diverge in theory and practice. We introduce the first sound and efficient algorithm for training energy-based policies, based on an actor-critic architecture. Our algorithm is computationally efficient, converges close to a local optimum, and outperforms Sallans and Hinton (2004) in several high dimensional domains.
@inproceedings{HeeSilTeh2012a,
author = {Heess, N. and Silver, D. and Teh, Y. W.},
booktitle = {JMLR Workshop and Conference Proceedings: EWRL 2012},
title = {Actor-Critic Reinforcement Learning with Energy-Based Policies},
year = {2012},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v24/heess12a.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v24/heess12a/heess12a.pdf}
}
A. Mnih, Y. W. Teh, Learning Label Trees for Probabilistic Modelling of Implicit Feedback, in Advances in Neural Information Processing Systems (NeurIPS), 2012.
@inproceedings{MniTeh2012b,
author = {Mnih, A. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Learning Label Trees for Probabilistic Modelling of Implicit Feedback},
year = {2012},
bdsk-url-1 = {https://papers.nips.cc/paper/4620-learning-label-trees-for-probabilistic-modelling-of-implicit-feedback},
bdsk-url-2 = {https://papers.nips.cc/paper/4620-learning-label-trees-for-probabilistic-modelling-of-implicit-feedback.pdf},
bdsk-url-3 = {https://www.cs.toronto.edu/~amnih/posters/cftree_poster.pdf}
}
L. Elliott
,
Y. W. Teh
,
Scalable Imputation of Genetic Data with a Discrete Fragmentation-Coagulation Process, in Advances in Neural Information Processing Systems (NeurIPS), 2012.
We present a Bayesian nonparametric model for genetic sequence data in which a set of genetic sequences is modelled using a Markov model of partitions. The partitions at consecutive locations in the genome are related by their clusters first splitting and then merging. Our model can be thought of as a discrete time analogue of continuous time fragmentation-coagulation processes [Teh et al 2011], preserving the important properties of projectivity, exchangeability and reversibility, while being more scalable. We apply this model to the problem of genotype imputation, showing improved computational efficiency while maintaining the same accuracies as in [Teh et al 2011].
@inproceedings{EllTeh2012a,
author = {Elliott, L. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Scalable Imputation of Genetic Data with a Discrete Fragmentation-Coagulation Process},
year = {2012},
bdsk-url-1 = {http://papers.nips.cc/paper/4782-scalable-imputation-of-genetic-data-with-a-discrete-fragmentation-coagulation-process},
bdsk-url-2 = {http://papers.nips.cc/paper/4782-scalable-imputation-of-genetic-data-with-a-discrete-fragmentation-coagulation-process.pdf}
}
2011
V. Rao
,
Y. W. Teh
,
Gaussian Process Modulated Renewal Processes, in Advances in Neural Information Processing Systems (NeurIPS), 2011.
Renewal processes are generalizations of the Poisson process on the real line, whose intervals are drawn i.i.d. from some distribution. Modulated renewal processes allow these distributions to vary with time, allowing the introduction of nonstationarity. In this work, we take a nonparametric Bayesian approach, modeling this nonstationarity with a Gaussian process. Our approach is based on the idea of uniformization, allowing us to draw exact samples from an otherwise intractable distribution. We develop a novel and efficient MCMC sampler for posterior inference. In our experiments, we test our approach on a number of synthetic and real datasets.
@inproceedings{RaoTeh2011b,
author = {Rao, V. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Gaussian Process Modulated Renewal Processes},
year = {2011},
bdsk-url-1 = {https://papers.nips.cc/paper/4358-gaussian-process-modulated-renewal-processes},
bdsk-url-2 = {https://papers.nips.cc/paper/4358-gaussian-process-modulated-renewal-processes.pdf},
bdsk-url-3 = {https://papers.nips.cc/paper/4358-gaussian-process-modulated-renewal-processes-supplemental.zip}
}
V. Rao
,
Y. W. Teh
,
Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks, in Uncertainty in Artificial Intelligence (UAI), 2011.
Markov jump processes and continuous time Bayesian networks are important classes of continuous time dynamical systems. In this paper, we tackle the problem of inferring unobserved paths in these models by introducing a fast auxiliary variable Gibbs sampler. Our approach is based on the idea of uniformization, and sets up a Markov chain over paths by sampling a finite set of virtual jump times and then running a standard hidden Markov model forward filtering-backward sampling algorithm over states at the set of extant and virtual jump times. We demonstrate significant computational benefits over a state-of-the-art Gibbs sampler on a number of continuous time Bayesian networks.
@inproceedings{RaoTeh2011a,
author = {Rao, V. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {Fast {MCMC} sampling for {M}arkov jump processes and continuous time {B}ayesian networks},
year = {2011},
bdsk-url-1 = {https://arxiv.org/pdf/1202.3760v1.pdf}
}
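The uniformization idea at the heart of the sampler can be illustrated by forward simulation: a dominating rate Ω turns the continuous-time chain into a discrete chain subordinated to a Poisson number of candidate jump times, with self-transitions acting as "virtual" jumps. This sketch only simulates a path; it does not implement the paper's auxiliary-variable Gibbs sampler, and the generator below is a toy example:

```python
import numpy as np

rng = np.random.default_rng(1)

def uniformized_path(Q, pi0, T, omega=None):
    """Simulate a Markov jump process on [0, T] via uniformization.

    Q   : generator matrix (rows sum to zero)
    pi0 : initial state distribution
    With omega >= max_i |Q_ii|, the chain B = I + Q/omega run at the points of
    a homogeneous Poisson(omega) process on [0, T] has the same law as the MJP;
    self-transitions of B are 'virtual' jumps and are discarded.
    """
    n = Q.shape[0]
    if omega is None:
        omega = 2.0 * np.max(-np.diag(Q))  # any omega above the max exit rate works
    B = np.eye(n) + Q / omega
    times = np.sort(rng.uniform(0.0, T, size=rng.poisson(omega * T)))
    state = rng.choice(n, p=pi0)
    path = [(0.0, state)]
    for t in times:
        nxt = rng.choice(n, p=B[state])
        if nxt != state:               # keep only real jumps
            path.append((t, nxt))
            state = nxt
    return path

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
path = uniformized_path(Q, pi0=np.array([1.0, 0.0]), T=5.0)
```

The Gibbs sampler in the paper reverses this construction: given a path, it resamples the virtual jump times, then runs forward filtering-backward sampling over the combined set of jump times.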
D. Görür
,
Y. W. Teh
,
Concave-Convex Adaptive Rejection Sampling, Journal of Computational and Graphical Statistics, 2011.
We describe a method for generating independent samples from univariate density functions using adaptive rejection sampling without the log-concavity requirement. The method makes use of the fact that many functions can be expressed as a sum of concave and convex functions. Using a concave-convex decomposition, we bound the log-density by separately bounding the concave and convex parts using piecewise linear functions. The upper bound can then be used as the proposal distribution in rejection sampling. We demonstrate the applicability of the concave-convex approach on a number of standard distributions and describe an application to the efficient construction of sequential Monte Carlo proposal distributions for inference over genealogical trees. Computer code for the proposed algorithms is available online.
@article{GorTeh2011a,
author = {{G\"or\"ur}, D. and Teh, Y. W.},
doi = {10.1198/jcgs.2011.09058},
journal = {Journal of Computational and Graphical Statistics},
title = {Concave-Convex Adaptive Rejection Sampling},
year = {2011},
bdsk-url-1 = {http://amstat.tandfonline.com/doi/abs/10.1198/jcgs.2011.09058},
bdsk-url-2 = {http://dx.doi.org/10.1198/jcgs.2011.09058}
}
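The bounding step can be checked numerically: under a split log p = f_concave + f_convex, a concave function lies below any of its tangents and a convex function lies below its chord over a segment, so the sum admits a piecewise-linear upper envelope. The toy log-density and its decomposition below are illustrative assumptions, not an example from the paper:

```python
import numpy as np

def upper_bound(x, a, b, f_cc, f_cv, df_cc):
    """Linear upper bound on f = f_cc + f_cv over the segment [a, b]:
    the concave part is bounded by its tangent at the midpoint, and the
    convex part by its chord through the segment endpoints."""
    m = 0.5 * (a + b)
    tangent = f_cc(m) + df_cc(m) * (x - m)
    chord = f_cv(a) + (f_cv(b) - f_cv(a)) * (x - a) / (b - a)
    return tangent + chord

# Concave-convex split of a toy log-density: f(x) = -x**2/2 (concave) + |x| (convex).
f_cc = lambda x: -0.5 * x ** 2
df_cc = lambda x: -x
f_cv = abs

def envelope(x, knots=(-3.0, 0.0, 3.0)):
    """Piecewise-linear upper envelope of f on the segments between knots."""
    for a, b in zip(knots[:-1], knots[1:]):
        if a <= x <= b:
            return upper_bound(x, a, b, f_cc, f_cv, df_cc)
    raise ValueError("x outside support")

xs = np.linspace(-3.0, 3.0, 601)
gap = np.array([envelope(float(x)) - (f_cc(x) + f_cv(x)) for x in xs])
```

Exponentiating the envelope gives a piecewise-exponential proposal that can be sampled exactly and used for rejection, with segments refined adaptively at rejected points.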
C. Blundell
,
Y. W. Teh
,
K. A. Heller
,
Discovering Non-binary Hierarchical Structures with Bayesian Rose Trees, in Mixture Estimation and Applications, C. P. Robert, K. Mengersen, and M. Titterington, Eds. John Wiley & Sons, 2011.
@incollection{BluTehHel2011a,
author = {Blundell, C. and Teh, Y. W. and Heller, K. A.},
booktitle = {Mixture Estimation and Applications},
editor = {Robert, C. P. and Mengersen, K. and Titterington, M.},
publisher = {John Wiley \& Sons},
title = {Discovering Non-binary Hierarchical Structures with {B}ayesian Rose Trees},
year = {2011},
bdsk-url-1 = {http://eu.wiley.com/WileyCDA/WileyTitle/productCd-111999389X.html},
bdsk-url-2 = {http://www.stats.ox.ac.uk/~teh/research/npbayes/BluTehHel2011a.pdf},
bdsk-url-3 = {http://www.stats.ox.ac.uk/~teh/research/npbayes/brt.pdf}
}
R. Silva
,
C. Blundell
,
Y. W. Teh
,
Mixed Cumulative Distribution Networks, in Artificial Intelligence and Statistics (AISTATS), 2011.
Directed acyclic graphs (DAGs) are a popular framework to express multivariate probability distributions. Acyclic directed mixed graphs (ADMGs) are generalizations of DAGs that can succinctly capture much richer sets of conditional independencies, and are especially useful in modeling the effects of latent variables implicitly. Unfortunately, there are currently no parameterizations of general ADMGs. In this paper, we apply recent work on cumulative distribution networks and copulas to propose one general construction for ADMG models. We consider a simple parameter estimation approach, and report some encouraging experimental results.
@inproceedings{SilBluTeh2011a,
author = {Silva, R. and Blundell, C. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Mixed Cumulative Distribution Networks},
year = {2011},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/compstats/SilBluTeh2011a.pdf},
bdsk-url-2 = {http://www.stats.ox.ac.uk/~teh/research/compstats/SilBluTeh2011a-supp.pdf}
}
Y. W. Teh
,
C. Blundell
,
L. T. Elliott
,
Modelling Genetic Variations with Fragmentation-Coagulation Processes, in Advances in Neural Information Processing Systems (NeurIPS), 2011.
We propose a novel class of Bayesian nonparametric models for sequential data called fragmentation-coagulation processes (FCPs). FCPs model a set of sequences using a partition-valued Markov process which evolves by splitting and merging clusters. An FCP is exchangeable, projective, stationary and reversible, and its equilibrium distributions are given by the Chinese restaurant process. As opposed to hidden Markov models, FCPs allow for flexible modelling of the number of clusters, and they avoid label switching non-identifiability problems. We develop an efficient Gibbs sampler for FCPs which uses uniformization and the forward-backward algorithm. Our development of FCPs is motivated by applications in population genetics, and we demonstrate the utility of FCPs on problems of genotype imputation with phased and unphased SNP data.
@inproceedings{TehBluEll2011a,
author = {Teh, Y. W. and Blundell, C. and Elliott, L. T.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Modelling Genetic Variations with Fragmentation-Coagulation Processes},
year = {2011},
bdsk-url-1 = {https://papers.nips.cc/paper/4211-modelling-genetic-variations-using-fragmentation-coagulation-processes},
bdsk-url-2 = {https://papers.nips.cc/paper/4211-modelling-genetic-variations-using-fragmentation-coagulation-processes.pdf}
}
F. Wood
,
J. Gasthaus
,
C. Archambeau
,
L. James
,
Y. W. Teh
,
The Sequence Memoizer, Communications of the ACM, vol. 54, no. 2, 91–98, 2011.
Probabilistic models of sequences play a central role in most machine translation, automated speech recognition, lossless compression, spell-checking, and gene identification applications to name but a few. Unfortunately, real-world sequence data often exhibit long range dependencies which can only be captured by computationally challenging, complex models. Sequence data arising from natural processes also often exhibits power-law properties, yet common sequence models do not capture such properties. The sequence memoizer is a new hierarchical Bayesian model for discrete sequence data that captures long range dependencies and power-law characteristics, while remaining computationally attractive. Its utility as a language model and general purpose lossless compressor is demonstrated.
@article{WooGasArc2011a,
author = {Wood, F. and Gasthaus, J. and Archambeau, C. and James, L. and Teh, Y. W.},
journal = {Communications of the ACM},
number = {2},
pages = {91-98},
title = {The Sequence Memoizer},
volume = {54},
year = {2011},
bdsk-url-1 = {http://cacm.acm.org/magazines/2011/2/104391-the-sequence-memoizer/fulltext},
bdsk-url-2 = {http://cacm.acm.org/magazines/2011/2/104391-the-sequence-memoizer/pdf},
bdsk-url-3 = {http://www.stats.ox.ac.uk/~teh/research/compling/WooGasArc2011a.pdf},
bdsk-url-4 = {http://www.stats.ox.ac.uk/~teh/research/compling/bayeslm.pdf}
}
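The power-law behaviour the sequence memoizer captures comes from the Pitman-Yor process. A single Pitman-Yor restaurant can be sketched as follows; the full model is a hierarchy of such restaurants, one per conditioning context, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(2)

def pitman_yor_crp(n, d, theta):
    """Seat n customers by the Pitman-Yor Chinese restaurant process.

    With c_k the table sizes and K the current number of tables, the next
    customer joins table k with prob (c_k - d) / (theta + n_seated) and opens
    a new table with prob (theta + d * K) / (theta + n_seated).
    """
    counts = []                              # customers per table
    for i in range(n):
        K = len(counts)
        probs = np.array([c - d for c in counts] + [theta + d * K], dtype=float)
        probs /= theta + i                   # probabilities sum to one
        k = rng.choice(K + 1, p=probs)
        if k == K:
            counts.append(1)
        else:
            counts[k] += 1
    return counts

counts = pitman_yor_crp(2000, d=0.5, theta=1.0)
```

With discount d = 0.5 the number of tables grows roughly like √n, the kind of power-law growth that matches word-frequency statistics in natural language.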
M. Welling
,
Y. W. Teh
,
Bayesian Learning via Stochastic Gradient Langevin Dynamics, in International Conference on Machine Learning (ICML), 2011.
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an in-built protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a “sampling threshold” and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.
@inproceedings{WelTeh2011a,
author = {Welling, M. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {{B}ayesian Learning via Stochastic Gradient {L}angevin Dynamics},
year = {2011},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/compstats/WelTeh2011a.pdf}
}
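The update rule above — a stochastic gradient step plus injected Gaussian noise whose variance equals the stepsize — can be sketched on a toy Gaussian model. The model, stepsize schedule, and burn-in cutoff below are illustrative choices, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: data ~ N(theta, 1) with a N(0, 10) prior on theta.
N = 1000
data = rng.normal(1.5, 1.0, size=N)

def sgld(n_iters=5000, batch=50, eps0=1e-3):
    """Stochastic gradient Langevin dynamics with a polynomially decaying stepsize."""
    theta = 0.0
    samples = []
    for t in range(n_iters):
        eps = eps0 * (1.0 + t) ** -0.55
        xb = rng.choice(data, size=batch, replace=False)
        grad_prior = -theta / 10.0
        grad_lik = (N / batch) * np.sum(xb - theta)   # rescaled mini-batch gradient
        # Langevin step: half-stepsize times gradient, plus N(0, eps) noise.
        theta += 0.5 * eps * (grad_prior + grad_lik) + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)
    return np.array(samples)

samples = sgld()
posterior_mean = samples[2000:].mean()   # discard burn-in
```

Early on, the gradient term dominates and the iterates behave like stochastic gradient descent; as the stepsize anneals, the injected noise dominates and the iterates become approximate posterior samples — the seamless transition the abstract describes.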
2010
C. Blundell
,
Y. W. Teh
,
K. A. Heller
,
Bayesian Rose Trees, in Uncertainty in Artificial Intelligence (UAI), 2010.
Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.
@inproceedings{BluTehHel2010a,
author = {Blundell, C. and Teh, Y. W. and Heller, K. A.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {{B}ayesian Rose Trees},
year = {2010},
bdsk-url-1 = {https://arxiv.org/pdf/1203.3468v1.pdf}
}
J. Gasthaus
,
Y. W. Teh
,
Improvements to the Sequence Memoizer, in Advances in Neural Information Processing Systems (NeurIPS), 2010.
The sequence memoizer is a model for sequence data with state-of-the-art performance on language modeling and compression. We propose a number of improvements to the model and inference algorithm, including an enlarged range of hyperparameters, a memory-efficient representation, and inference algorithms operating on the new representation. Our derivations are based on precise definitions of the various processes that will also allow us to provide an elementary proof of the mysterious coagulation and fragmentation properties used in the original paper on the sequence memoizer by Wood et al. (2009). We present some experimental results supporting our improvements.
@inproceedings{GasTeh2010a,
author = {Gasthaus, J. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Improvements to the Sequence Memoizer},
year = {2010},
bdsk-url-1 = {https://papers.nips.cc/paper/3938-improvements-to-the-sequence-memoizer},
bdsk-url-2 = {https://papers.nips.cc/paper/3938-improvements-to-the-sequence-memoizer.pdf},
bdsk-url-3 = {https://papers.nips.cc/paper/3938-improvements-to-the-sequence-memoizer-supplemental.zip}
}
Y. W. Teh
,
M. I. Jordan
,
Hierarchical Bayesian Nonparametric Models with Applications, in Bayesian Nonparametrics, N. Hjort, C. Holmes, P. Müller, and S. Walker, Eds. Cambridge University Press, 2010.
Hierarchical modeling is a fundamental concept in Bayesian statistics. The basic idea is that parameters are endowed with distributions which may themselves introduce new parameters, and this construction recurses. In this review we discuss the role of hierarchical modeling in Bayesian nonparametrics, focusing on models in which the infinite-dimensional parameters are treated hierarchically. For example, we consider a model in which the base measure for a Dirichlet process is itself treated as a draw from another Dirichlet process. This yields a natural recursion that we refer to as a hierarchical Dirichlet process. We also discuss hierarchies based on the Pitman-Yor process and on completely random processes. We demonstrate the value of these hierarchical constructions in a wide range of practical applications, including problems in computational biology, computer vision and natural language processing.
@incollection{TehJor2010a,
annote = {Technical Report 770. Department of Statistics, University of California
at Berkeley.
},
author = {Teh, Y. W. and Jordan, M. I.},
booktitle = {Bayesian Nonparametrics},
editor = {Hjort, N. and Holmes, C. and M{\"u}ller, P. and Walker, S.},
publisher = {Cambridge University Press},
title = {Hierarchical {B}ayesian Nonparametric Models with Applications},
year = {2010},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/npbayes/TehJor2010a.pdf}
}
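The recursion the chapter describes — a Dirichlet process whose base measure is itself a DP draw — can be sketched with truncated stick-breaking. The finite Dirichlet step below is a standard truncation of the group-level DP over the shared atoms, used here as a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def stick_breaking(alpha, K):
    """Truncated stick-breaking weights of a DP: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=K)
    return v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])

# Top-level DP draw: weights over a shared, countable set of atoms.
global_w = stick_breaking(alpha=5.0, K=100)
atoms = rng.normal(size=100)   # shared atom locations

# Group-level DP(2, G0) with discrete base measure G0: in the truncated
# representation its weights are Dirichlet-distributed with parameter
# proportional to the global weights, so every group reuses the same atoms.
group_w = rng.dirichlet(2.0 * global_w + 1e-12)
```

Sharing atoms across groups while letting each group reweight them is exactly what makes the hierarchical DP useful for grouped clustering problems such as topic modeling.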
Y. W. Teh
,
Dirichlet Processes, in Encyclopedia of Machine Learning, Springer, 2010.
@incollection{Teh2010a,
author = {Teh, Y. W.},
booktitle = {Encyclopedia of Machine Learning},
publisher = {Springer},
title = {{D}irichlet Processes},
year = {2010},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/npbayes/Teh2010a.pdf}
}
P. Orbanz
,
Y. W. Teh
,
Bayesian Nonparametric Models, in Encyclopedia of Machine Learning, Springer, 2010.
@incollection{OrbTeh2010a,
author = {Orbanz, P. and Teh, Y. W.},
booktitle = {Encyclopedia of Machine Learning},
publisher = {Springer},
title = {{B}ayesian Nonparametric Models},
year = {2010},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/npbayes/OrbTeh2010a.pdf}
}
J. Gasthaus
,
F. Wood
,
Y. W. Teh
,
Lossless compression based on the Sequence Memoizer, in Data Compression Conference, 2010.
In this work we describe a sequence compression method based on combining a Bayesian nonparametric sequence model with entropy encoding. The model, a hierarchy of Pitman-Yor processes of unbounded depth previously proposed by Wood et al. [2009] in the context of language modelling, allows modelling of long-range dependencies by allowing conditioning contexts of unbounded length. We show that incremental approximate inference can be performed in this model, thereby allowing it to be used in a text compression setting. The resulting compressor reliably outperforms several PPM variants on many types of data, but is particularly effective in compressing data that exhibits power law properties.
@inproceedings{GasWooTeh2010a,
author = {Gasthaus, J. and Wood, F. and Teh, Y. W.},
booktitle = {Data Compression Conference},
title = {Lossless compression based on the Sequence Memoizer},
year = {2010},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/compling/GasWooTeh2010a.pdf}
}
2009
V. Rao
,
Y. W. Teh
,
Spatial Normalized Gamma Processes, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 22.
Dependent Dirichlet processes (DPs) are dependent sets of random measures, each being marginally Dirichlet process distributed. They are used in Bayesian nonparametric models when the usual exchangeability assumption does not hold. We propose a simple and general framework to construct dependent DPs by marginalizing and normalizing a single gamma process over an extended space. The result is a set of DPs, each located at a point in a space such that neighboring DPs are more dependent. We describe Markov chain Monte Carlo inference, involving the typical Gibbs sampling and three different Metropolis-Hastings proposals to speed up convergence. We report an empirical study of convergence speeds on a synthetic dataset and demonstrate an application of the model to topic modeling through time.
@inproceedings{RaoTeh2009a,
author = {Rao, V. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Spatial Normalized Gamma Processes},
volume = {22},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3630-spatial-normalized-gamma-processes},
bdsk-url-2 = {https://papers.nips.cc/paper/3630-spatial-normalized-gamma-processes.pdf}
}
F. Wood
,
Y. W. Teh
,
A Hierarchical Nonparametric Bayesian Approach to Statistical Language Model Domain Adaptation, in Artificial Intelligence and Statistics (AISTATS), 2009.
In this paper we present a doubly hierarchical Pitman-Yor process language model. Its bottom layer of hierarchy consists of multiple hierarchical Pitman-Yor process language models, one each for some number of domains. The novel top layer of hierarchy consists of a mechanism to couple together multiple language models such that they share statistical strength. Intuitively this sharing results in the “adaptation” of a latent shared language model to each domain. We introduce a general formalism capable of describing the overall model which we call the graphical Pitman-Yor process and explain how to perform Bayesian inference in it. We present encouraging language model domain adaptation results that both illustrate the potential benefits of our new model and suggest new avenues of inquiry.
@inproceedings{WooTeh2009a,
author = {Wood, F. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {A Hierarchical Nonparametric {B}ayesian Approach to Statistical Language Model Domain Adaptation},
year = {2009},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v5/wood09a.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v5/wood09a/wood09a.pdf}
}
D. M. Roy
,
Y. W. Teh
,
The Mondrian Process, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 21.
We describe a novel stochastic process that can be used to construct a multidimensional generalization of the stick-breaking process and which is related to the classic stick breaking process described by [Sethuraman 1994] in one dimension. We describe how the process can be applied to relational data modeling using the de Finetti representation for infinitely and partially exchangeable arrays.
@inproceedings{RoyTeh2009a,
author = {Roy, D. M. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {The {M}ondrian Process},
volume = {21},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3622-the-mondrian-process},
bdsk-url-2 = {https://papers.nips.cc/paper/3622-the-mondrian-process.pdf}
}
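The Mondrian process itself admits a short generative sketch, following the usual description: budget-limited recursive axis-aligned cuts, where the waiting time for a cut is exponential with rate given by the box's total side length and the cut axis is chosen proportionally to side length. The budget value below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def mondrian(box, budget):
    """Sample a Mondrian partition of an axis-aligned box.

    box: list of (low, high) intervals, one per dimension. The next cut
    arrives after an Exponential time with rate equal to the sum of side
    lengths; if it exceeds the remaining budget, the box becomes a leaf.
    """
    lengths = np.array([hi - lo for lo, hi in box])
    cost = rng.exponential(1.0 / lengths.sum())   # scale = 1 / rate
    if cost > budget:
        return [box]
    d = rng.choice(len(box), p=lengths / lengths.sum())  # axis ∝ side length
    lo, hi = box[d]
    cut = rng.uniform(lo, hi)
    left = [b if i != d else (lo, cut) for i, b in enumerate(box)]
    right = [b if i != d else (cut, hi) for i, b in enumerate(box)]
    rest = budget - cost
    return mondrian(left, rest) + mondrian(right, rest)

leaves = mondrian([(0.0, 1.0), (0.0, 1.0)], budget=3.0)
```

Restricting the resulting partition to any sub-box yields a Mondrian of the sub-box — the self-consistency that makes the process usable as a prior over partitions of exchangeable arrays.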
D. Görür
,
Y. W. Teh
,
An Efficient Sequential Monte-Carlo Algorithm for Coalescent Clustering, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 21.
We propose an efficient sequential Monte Carlo inference scheme for the recently proposed coalescent clustering model (Teh et al., 2008). Our algorithm has a quadratic runtime while those in (Teh et al., 2008) are cubic. In experiments, we were surprised to find that in addition to being more efficient, it is also a better sequential Monte Carlo sampler than the best in (Teh et al., 2008), when measured in terms of variance of estimated likelihood and effective sample size.
@inproceedings{GorTeh2009a,
author = {{G\"or\"ur}, D. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {An Efficient Sequential {M}onte-{C}arlo Algorithm for Coalescent Clustering},
volume = {21},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3426-an-efficient-sequential-monte-carlo-algorithm-for-coalescent-clustering},
bdsk-url-2 = {https://papers.nips.cc/paper/3426-an-efficient-sequential-monte-carlo-algorithm-for-coalescent-clustering.pdf}
}
Y. W. Teh
,
D. Görür
,
Indian Buffet Processes with Power-law Behavior, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 22.
The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We show that our power-law IBP is a good model for word occurrences in documents with improved performance over the normal IBP.
@inproceedings{TehGor2009a,
author = {Teh, Y. W. and {G\"or\"ur}, D.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {{I}ndian Buffet Processes with Power-law Behavior},
volume = {22},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3638-indian-buffet-processes-with-power-law-behavior},
bdsk-url-2 = {https://papers.nips.cc/paper/3638-indian-buffet-processes-with-power-law-behavior.pdf}
}
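The three-parameter process can be sketched in restaurant form. The Poisson rate for new dishes below follows my reading of the stable-beta construction (mass α, concentration c, stability σ) and should be checked against the paper; with σ = 0 and c = 1 it reduces to the usual Poisson(α/n) of the standard IBP:

```python
import numpy as np
from math import exp, lgamma

rng = np.random.default_rng(6)

def stable_beta_ibp(n, alpha, sigma, c):
    """Generative sketch of a three-parameter (power-law) Indian buffet process.

    Customer i takes existing dish k with probability (m_k - sigma) / (i - 1 + c),
    then samples a Poisson number of new dishes; the rate decays polynomially in
    i when sigma > 0, giving power-law growth in the number of dishes.
    """
    dishes = []   # m_k: number of customers who have taken dish k
    for i in range(1, n + 1):
        for k in range(len(dishes)):
            if rng.random() < (dishes[k] - sigma) / (i - 1 + c):
                dishes[k] += 1
        # New-dish rate; equals alpha for the first customer.
        rate = alpha * exp(lgamma(1 + c) + lgamma(i - 1 + c + sigma)
                           - lgamma(i + c) - lgamma(c + sigma))
        dishes.extend([1] * rng.poisson(rate))
    return dishes

dishes = stable_beta_ibp(n=100, alpha=5.0, sigma=0.5, c=1.0)
```

The discount σ plays the same role as in the Pitman-Yor process: popular dishes are slightly penalized, and the number of dishes grows like n^σ rather than log n.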
K. A. Heller
,
Y. W. Teh
,
D. Görür
,
Infinite Hierarchical Hidden Markov Models, in Artificial Intelligence and Statistics (AISTATS), 2009, vol. 5.
In this paper we present the Infinite Hierarchical Hidden Markov Model (IHHMM), a nonparametric generalization of Hierarchical Hidden Markov Models (HHMMs). HHMMs have been used for modeling sequential data in applications such as speech recognition, detecting topic transitions in video and extracting information from text. The IHHMM provides more flexible modeling of sequential data by allowing a potentially unbounded number of levels in the hierarchy, instead of requiring the specification of a fixed hierarchy depth. Inference and learning are performed efficiently using Gibbs sampling and a modified forward-backtrack algorithm. We show encouraging demonstrations of the workings of the IHHMM.
@inproceedings{HelTehGor2009a,
author = {Heller, K. A. and Teh, Y. W. and {G\"or\"ur}, D.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Infinite Hierarchical Hidden {M}arkov Models},
volume = {5},
year = {2009},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v5/heller09a.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v5/heller09a/heller09a.pdf}
}
F. Doshi
,
K. T. Miller
,
J. Van Gael
,
Y. W. Teh
,
Variational Inference for the Indian Buffet Process, in Artificial Intelligence and Statistics (AISTATS), 2009, vol. 5.
The Indian Buffet Process (IBP) is a nonparametric prior for latent feature models in which observations are influenced by a combination of several hidden features. For example, images may be composed of several objects or sounds may consist of several notes. Latent feature models seek to infer these latent features from a set of observations. Current inference methods for the IBP have all relied on sampling. While these methods are guaranteed to be accurate in the limit, in practice, samplers for the IBP tend to mix slowly. We develop a deterministic variational method for the IBP. We provide theoretical guarantees on its truncation bounds and demonstrate its superior performance for high dimensional data sets.
@inproceedings{DosMilVan2009a,
author = {Doshi, F. and Miller, K. T. and {Van Gael}, J. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Variational Inference for the {I}ndian Buffet Process},
volume = {5},
year = {2009},
bdsk-url-1 = {http://www.jmlr.org/proceedings/papers/v5/doshi09a.html},
bdsk-url-2 = {http://www.jmlr.org/proceedings/papers/v5/doshi09a/doshi09a.pdf}
}
J. Van Gael
,
Y. W. Teh
,
Z. Ghahramani
,
The Infinite Factorial Hidden Markov Model, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 21.
We introduce a new probability distribution over a potentially infinite number of binary Markov chains which we call the Markov Indian buffet process. This process extends the IBP to allow temporal dependencies in the hidden variables. We use this stochastic process to build a nonparametric extension of the factorial hidden Markov model. After working out an inference scheme which combines slice sampling and dynamic programming we demonstrate how the infinite factorial hidden Markov model can be used for blind source separation.
@inproceedings{VanTehGha2009a,
author = {{Van Gael}, J. and Teh, Y. W. and Ghahramani, Z.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {The Infinite Factorial Hidden {M}arkov Model},
volume = {21},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3518-the-infinite-factorial-hidden-markov-model},
bdsk-url-2 = {https://papers.nips.cc/paper/3518-the-infinite-factorial-hidden-markov-model.pdf}
}
J. Gasthaus
,
F. Wood
,
D. Görür
,
Y. W. Teh
,
Dependent Dirichlet Process Spike Sorting, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 21, 497–504.
In this paper we propose a new incremental spike sorting model that automatically eliminates refractory period violations, accounts for action potential waveform drift, and can handle “appearance” and “disappearance” of neurons. Our approach is to augment a known time-varying Dirichlet process that ties together a sequence of infinite Gaussian mixture models, one per action potential waveform observation, with an interspike-interval-dependent likelihood that prohibits refractory period violations. We demonstrate this model by showing results from sorting two publicly available neural data recordings for which a partial ground-truth labeling is known.
@inproceedings{GasWooGor2009a,
author = {Gasthaus, J. and Wood, F. and {G\"or\"ur}, D. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
pages = {497-504},
title = {Dependent {D}irichlet Process Spike Sorting},
volume = {21},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3606-dependent-dirichlet-process-spike-sorting},
bdsk-url-2 = {https://papers.nips.cc/paper/3606-dependent-dirichlet-process-spike-sorting.pdf}
}
F. Wood
,
C. Archambeau
,
J. Gasthaus
,
L. F. James
,
Y. W. Teh
,
A Stochastic Memoizer for Sequence Data, in International Conference on Machine Learning (ICML), 2009, vol. 26, 1129–1136.
We propose an unbounded-depth, hierarchical, Bayesian nonparametric model for discrete sequence data. This model can be estimated from a single training sequence, yet shares statistical strength between subsequent symbol predictive distributions in such a way that predictive performance generalizes well. The model builds on a specific parameterization of an unbounded-depth hierarchical Pitman-Yor process. We introduce analytic marginalization steps (using coagulation operators) to reduce this model to one that can be represented in time and space linear in the length of the training sequence. We show how to perform inference in such a model without truncation approximation and introduce fragmentation operators necessary to do predictive inference. We demonstrate the sequence memoizer by using it as a language model, achieving state-of-the-art results.
@inproceedings{WooArcGas2009a,
author = {Wood, F. and Archambeau, C. and Gasthaus, J. and James, L. F. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
pages = {1129-1136},
title = {A Stochastic Memoizer for Sequence Data},
volume = {26},
year = {2009},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/compling/WooArcGas2009a.pdf}
}
G. R. Haffari
,
Y. W. Teh
,
Hierarchical Dirichlet Trees for Information Retrieval, in Proceedings of the Annual Meeting of the North American Association for Computational Linguistics and the Human Language Technology Conference, 2009.
We propose a principled probabilistic framework which uses trees over the vocabulary to capture similarities among terms in an information retrieval setting. This allows the retrieval of documents based not just on occurrences of specific query terms, but also on similarities between terms (an effect similar to query expansion). Additionally our principled generative model exhibits an effect similar to inverse document frequency. We give encouraging experimental evidence of the superiority of the hierarchical Dirichlet tree compared to standard baselines.
@inproceedings{HafTeh2009a,
author = {Haffari, G. R. and Teh, Y. W.},
booktitle = {Proceedings of the Annual Meeting of the North American Association for Computational Linguistics and the Human Language Technology Conference},
title = {Hierarchical {D}irichlet Trees for Information Retrieval},
year = {2009},
bdsk-url-1 = {http://www.stats.ox.ac.uk/~teh/research/compling/HafTeh2009a.pdf}
}
G. Quon
,
Y. W. Teh
,
E. Chan
,
T. Hughes
,
M. Brudno
,
Q. Morris
,
A Mixture Model for the Evolution of Gene Expression in Non-homogeneous Datasets, in Advances in Neural Information Processing Systems (NeurIPS), 2009, vol. 21.
We address the challenge of assessing conservation of gene expression in complex, non-homogeneous datasets. Recent studies have demonstrated the success of probabilistic models in studying the evolution of gene expression in simple eukaryotic organisms such as yeast, for which measurements are typically scalar and independent. Models capable of studying expression evolution in much more complex organisms such as vertebrates are particularly important given the medical and scientific interest in species such as human and mouse. We present a statistical model that makes a number of significant extensions to previous models to enable characterization of changes in expression among highly complex organisms. We demonstrate the efficacy of our method on a microarray dataset containing diverse tissues from multiple vertebrate species. We anticipate that the model will be invaluable in the study of gene expression patterns in other diverse organisms as well, such as worms and insects.
@inproceedings{QuoTehCha2009a,
author = {Quon, G. and Teh, Y. W. and Chan, E. and Hughes, T. and Brudno, M. and Morris, Q.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {A Mixture Model for the Evolution of Gene Expression in Non-homogeneous Datasets},
volume = {21},
year = {2009},
bdsk-url-1 = {https://papers.nips.cc/paper/3384-a-mixture-model-for-the-evolution-of-gene-expression-in-non-homogeneous-datasets},
bdsk-url-2 = {https://papers.nips.cc/paper/3384-a-mixture-model-for-the-evolution-of-gene-expression-in-non-homogeneous-datasets.pdf}
}
A. Asuncion
,
M. Welling
,
P. Smyth
,
Y. W. Teh
,
On Smoothing and Inference for Topic Models, in Uncertainty in Artificial Intelligence (UAI), 2009.
Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data. Various learning algorithms have been developed in recent years, including collapsed Gibbs sampling, variational inference, and maximum a posteriori estimation, and this variety motivates the need for careful empirical comparisons. In this paper, we highlight the close connections between these approaches. We find that the main differences are attributable to the amount of smoothing applied to the counts. When the hyperparameters are optimized, the differences in performance among the algorithms diminish significantly. The ability of these algorithms to achieve solutions of comparable accuracy gives us the freedom to select computationally efficient approaches. Using the insights gained from this comparative study, we show how accurate topic models can be learned in several seconds on text corpora with thousands of documents.
@inproceedings{AsuWelSmy2009a,
author = {Asuncion, A. and Welling, M. and Smyth, P. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {On Smoothing and Inference for Topic Models},
year = {2009},
bdsk-url-1 = {https://arxiv.org/pdf/1205.2662.pdf}
}
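The convergence the abstract describes, that the inference algorithms differ mainly in how counts are smoothed, is easiest to see in a zeroth-order collapsed variational update, where each token's topic posterior is a ratio of smoothed expected counts. Below is a minimal pure-Python sketch of such a "CVB0"-style update loop for LDA; the paper's actual algorithms also cover hyperparameter optimization and higher-order correction terms, all omitted here, and the toy corpus and hyperparameter values are made up for illustration.

```python
import random

def cvb0_lda(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Zeroth-order collapsed variational updates for LDA on a toy corpus.

    docs: list of documents, each a list of integer word ids.
    Returns gamma[d][n][k], the variational posterior over the topic of
    token n in document d.
    """
    rng = random.Random(seed)
    # Random normalized initialization of the per-token topic posteriors.
    gamma = []
    for doc in docs:
        g_doc = []
        for _ in doc:
            raw = [rng.random() for _ in range(n_topics)]
            s = sum(raw)
            g_doc.append([r / s for r in raw])
        gamma.append(g_doc)
    # Expected counts: doc-topic, word-topic, and topic totals.
    n_dk = [[sum(g[k] for g in g_doc) for k in range(n_topics)] for g_doc in gamma]
    n_wk = [[0.0] * n_topics for _ in range(vocab_size)]
    n_k = [0.0] * n_topics
    for doc, g_doc in zip(docs, gamma):
        for w, g in zip(doc, g_doc):
            for k in range(n_topics):
                n_wk[w][k] += g[k]
                n_k[k] += g[k]
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                g = gamma[d][n]
                for k in range(n_topics):  # remove this token's own contribution
                    n_dk[d][k] -= g[k]; n_wk[w][k] -= g[k]; n_k[k] -= g[k]
                # Ratio of smoothed expected counts, then renormalize.
                new = [(n_wk[w][k] + beta) * (n_dk[d][k] + alpha) / (n_k[k] + vocab_size * beta)
                       for k in range(n_topics)]
                s = sum(new)
                gamma[d][n] = [x / s for x in new]
                g = gamma[d][n]
                for k in range(n_topics):  # add the updated contribution back
                    n_dk[d][k] += g[k]; n_wk[w][k] += g[k]; n_k[k] += g[k]
    return gamma

docs = [[0, 0, 1], [2, 2, 3], [0, 1, 0], [2, 3, 2]]
gamma = cvb0_lda(docs, n_topics=2, vocab_size=4)
```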
2008
H. L. Chieu
,
W. S. Lee
,
Y. W. Teh
,
Cooled and Relaxed Survey Propagation for MRFs, in Advances in Neural Information Processing Systems (NeurIPS), 2008, vol. 20.
We describe a new algorithm, Relaxed Survey Propagation (RSP), for finding MAP configurations in Markov random fields. We compare its performance with state-of-the-art algorithms including the max-product belief propagation, its sequential tree-reweighted variant, residual (sum-product) belief propagation, and tree-structured expectation propagation. We show that it outperforms all approaches for Ising models with mixed couplings, as well as on a web person disambiguation task formulated as a supervised clustering problem.
@inproceedings{ChiLeeTeh2008a,
author = {Chieu, H. L. and Lee, W. S. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Cooled and Relaxed Survey Propagation for {MRFs}},
volume = {20},
year = {2008},
bdsk-url-1 = {https://papers.nips.cc/paper/3308-cooled-and-relaxed-survey-propagation-for-mrfs},
bdsk-url-2 = {https://papers.nips.cc/paper/3308-cooled-and-relaxed-survey-propagation-for-mrfs.pdf}
}
Y. W. Teh
,
H. Daume III
,
D. M. Roy
,
Bayesian Agglomerative Clustering with Coalescents, in Advances in Neural Information Processing Systems (NeurIPS), 2008, vol. 20.
We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman’s coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over the state-of-the-art, and demonstrate our approach in document clustering and phylolinguistics.
@inproceedings{TehDauRoy2008a,
author = {Teh, Y. W. and {Daume III}, H. and Roy, D. M.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {{B}ayesian Agglomerative Clustering with Coalescents},
volume = {20},
year = {2008},
bdsk-url-1 = {http://papers.nips.cc/paper/3266-bayesian-agglomerative-clustering-with-coalescents},
bdsk-url-2 = {http://papers.nips.cc/paper/3266-bayesian-agglomerative-clustering-with-coalescents.pdf}
}
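Kingman's coalescent, the prior over trees underlying this model, is simple to simulate: with k lineages remaining, the next (backwards-in-time) merge happens after an Exponential(k(k-1)/2) wait and joins a uniformly chosen pair. A pure-Python sketch of the prior only, not the paper's greedy or sequential Monte Carlo inference:

```python
import random

def sample_kingman_coalescent(n, seed=None):
    """Simulate Kingman's coalescent on n leaves.

    Returns a list of merge events (time, left_subtree, right_subtree),
    where subtrees are nested tuples of leaf indices and times are
    negative because the coalescent runs backwards into the past.
    """
    rng = random.Random(seed)
    lineages = list(range(n))
    t = 0.0
    events = []
    while len(lineages) > 1:
        k = len(lineages)
        # Each of the k-choose-2 pairs coalesces at rate 1, so the wait to
        # the next merge is exponential with rate k * (k - 1) / 2.
        t -= rng.expovariate(k * (k - 1) / 2.0)
        i, j = rng.sample(range(k), 2)
        a, b = lineages[i], lineages[j]
        lineages = [x for idx, x in enumerate(lineages) if idx not in (i, j)]
        lineages.append((a, b))
        events.append((t, a, b))
    return events

events = sample_kingman_coalescent(5, seed=0)
```

Five leaves always coalesce in exactly four merge events, with strictly decreasing (more negative) event times.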
J. Van Gael
,
Y. Saatci
,
Y. W. Teh
,
Z. Ghahramani
,
Beam Sampling for the Infinite Hidden Markov Model, in International Conference on Machine Learning (ICML), 2008, vol. 25.
The infinite hidden Markov model is a nonparametric extension of the widely used hidden Markov model. Our paper introduces a new inference algorithm for the infinite hidden Markov model called beam sampling. Beam sampling combines slice sampling, which limits the number of states considered at each time step to a finite number, with dynamic programming, which samples whole state trajectories efficiently. Our algorithm typically outperforms the Gibbs sampler and is more robust. We present applications of iHMM inference using the beam sampler on changepoint detection and text prediction problems.
@inproceedings{VanSaaTeh2008a,
author = {{Van Gael}, J. and Saatci, Y. and Teh, Y. W. and Ghahramani, Z.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {Beam Sampling for the Infinite Hidden {M}arkov Model},
volume = {25},
year = {2008},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/VanSaaTeh2008a.pdf}
}
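The key step of beam sampling is the auxiliary slice variable: u_t is drawn below the probability of the current transition, and only transitions more probable than u_t survive into the dynamic program. The sketch below illustrates just this pruning step on a finite HMM (in the iHMM the same trick is what renders the infinite state space tractable); the transition matrix is made up for illustration.

```python
import random

def beam_prune(trans, states, seed=None):
    """One slice-sampling step of beam sampling, on a finite HMM.

    trans: row-stochastic transition matrix (list of lists).
    states: current state trajectory s_0 .. s_T.
    For each step t we draw u_t ~ Uniform(0, P(s_{t-1} -> s_t)) and keep
    only the transitions whose probability exceeds u_t.
    """
    rng = random.Random(seed)
    allowed = []
    for t in range(1, len(states)):
        p = trans[states[t - 1]][states[t]]
        u = rng.random() * p  # u_t in [0, p), strictly below p
        allowed.append([k for k, q in enumerate(trans[states[t - 1]]) if q > u])
    return allowed

trans = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.3, 0.3, 0.4]]
sets = beam_prune(trans, [0, 0, 1, 1, 2], seed=1)
```

By construction each pruned set still contains the currently occupied transition, so the current trajectory always remains reachable.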
M. Welling
,
Y. W. Teh
,
H. J. Kappen
,
Hybrid Variational/Gibbs Collapsed Inference in Topic Models, in Uncertainty in Artificial Intelligence (UAI), 2008, vol. 24.
Variational Bayesian inference and (collapsed) Gibbs sampling are the two important classes of inference algorithms for Bayesian networks. Both have their advantages and disadvantages: collapsed Gibbs sampling is unbiased but is also inefficient for large count values and requires averaging over many samples to reduce variance. On the other hand, variational Bayesian inference is efficient and accurate for large count values but suffers from bias for small counts. We propose a hybrid algorithm that combines the best of both worlds: it samples very small counts and applies variational updates to large counts. This hybridization is shown to significantly improve test-set perplexity relative to variational inference at no computational cost.
@inproceedings{WelTehKap2008a,
author = {Welling, M. and Teh, Y. W. and Kappen, H. J.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {Hybrid {Variational/Gibbs} Collapsed Inference in Topic Models},
volume = {24},
year = {2008},
bdsk-url-1 = {https://arxiv.org/pdf/1206.3297v1.pdf}
}
Y. W. Teh
,
K. Kurihara
,
M. Welling
,
Collapsed Variational Inference for HDP, in Advances in Neural Information Processing Systems (NeurIPS), 2008, vol. 20.
A wide variety of Dirichlet-multinomial ‘topic’ models have found interesting applications in recent years. While Gibbs sampling remains an important method of inference in such models, variational techniques have certain advantages such as easy assessment of convergence, easy optimization without the need to maintain detailed balance, a bound on the marginal likelihood, and side-stepping of issues with topic identifiability. The most accurate variational technique thus far, namely collapsed variational latent Dirichlet allocation, did not deal with model selection nor did it include inference for hyperparameters. We address both issues by generalizing the technique, obtaining the first variational algorithm to deal with the hierarchical Dirichlet process and to deal with hyperparameters of Dirichlet variables. Experiments show a significant improvement in accuracy.
@inproceedings{TehKurWel2008a,
author = {Teh, Y. W. and Kurihara, K. and Welling, M.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Collapsed Variational Inference for {HDP}},
volume = {20},
year = {2008},
bdsk-url-1 = {https://papers.nips.cc/paper/3342-collapsed-variational-inference-for-hdp},
bdsk-url-2 = {https://papers.nips.cc/paper/3342-collapsed-variational-inference-for-hdp.pdf}
}
2007
K. Kurihara
,
M. Welling
,
Y. W. Teh
,
Collapsed Variational Dirichlet Process Mixture Models, in Proceedings of the International Joint Conference on Artificial Intelligence, 2007, vol. 20.
Nonparametric Bayesian mixture models, in particular Dirichlet process (DP) mixture models, have shown great promise for density estimation and data clustering. Given the size of today’s datasets, computational efficiency becomes an essential ingredient in the applicability of these techniques to real world data. We study and experimentally compare a number of variational Bayesian (VB) approximations to the DP mixture model. In particular we consider the standard VB approximation where parameters are assumed to be independent from cluster assignment variables, and a novel collapsed VB approximation where mixture weights are marginalized out. For both VB approximations we consider two different ways to approximate the DP, by truncating the stick-breaking construction, and by using a finite mixture model with a symmetric Dirichlet prior.
@inproceedings{KurWelTeh2007a,
author = {Kurihara, K. and Welling, M. and Teh, Y. W.},
booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence},
title = {Collapsed Variational {D}irichlet Process Mixture Models},
volume = {20},
year = {2007},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/papers/KurWelTeh2007a.pdf}
}
Y. W. Teh
,
D. Görür
,
Z. Ghahramani
,
Stick-breaking Construction for the Indian Buffet Process, in Artificial Intelligence and Statistics (AISTATS), 2007, vol. 11.
The Indian buffet process (IBP) is a Bayesian nonparametric distribution whereby objects are modelled using an unbounded number of latent features. In this paper we derive a stick-breaking representation for the IBP. Based on this new representation, we develop slice samplers for the IBP that are efficient, easy to implement and are more generally applicable than the currently available Gibbs sampler. This representation, along with the work of Thibaux and Jordan, also illuminates interesting theoretical connections between the IBP, Chinese restaurant processes, Beta processes and Dirichlet processes.
@inproceedings{TehGorGha2007a,
author = {Teh, Y. W. and {G\"or\"ur}, D. and Ghahramani, Z.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Stick-breaking Construction for the {I}ndian Buffet Process},
volume = {11},
year = {2007},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/aistats2007.pdf}
}
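The stick-breaking representation derived in the paper expresses the k-th feature probability as a product of Beta(α, 1) variables, so the feature weights decay monotonically. A truncated pure-Python sketch (the paper's slice sampler exists precisely to avoid such truncation; we truncate only to keep the illustration finite):

```python
import random

def ibp_stick_breaking(alpha, n_objects, truncation, seed=None):
    """Sample a binary feature matrix from a truncated stick-breaking
    construction of the Indian buffet process.

    Feature weights are mu_k = prod_{i<=k} nu_i with nu_i ~ Beta(alpha, 1),
    and each object possesses feature k independently with probability mu_k.
    """
    rng = random.Random(seed)
    mus, mu = [], 1.0
    for _ in range(truncation):
        mu *= rng.betavariate(alpha, 1.0)  # break off another piece of the stick
        mus.append(mu)
    Z = [[1 if rng.random() < m else 0 for m in mus] for _ in range(n_objects)]
    return mus, Z

mus, Z = ibp_stick_breaking(alpha=2.0, n_objects=10, truncation=15, seed=0)
```

Because each mu_k multiplies the previous weight by a number in (0, 1), later features are used by fewer and fewer objects, which is the defining rich-get-richer behaviour of the IBP.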
J. F. Cai
,
W. S. Lee
,
Y. W. Teh
,
NUS-ML: Improving Word Sense Disambiguation Using Topic Features, in Proceedings of the International Workshop on Semantic Evaluations, 2007, vol. 4.
We participated in SemEval-1 English coarse-grained all-words task (task 7), English fine-grained all-words task (task 17, subtask 3) and English coarse-grained lexical sample task (task 17, subtask 1). The same method with different labeled data is used for the tasks; SemCor is the labeled corpus used to train our system for the all-words tasks, while the labeled corpus that is provided is used for the lexical sample task. The knowledge sources include part-of-speech of neighboring words, single words in the surrounding context, local collocations, and syntactic patterns. In addition, we constructed a topic feature, targeted to capture the global context information, using the latent Dirichlet allocation (LDA) algorithm on an unlabeled corpus. A modified naive Bayes classifier is constructed to incorporate all the features. We achieved 81.6%, 57.6%, and 88.7% for the coarse-grained all-words task, the fine-grained all-words task, and the coarse-grained lexical sample task, respectively.
@inproceedings{CaiLeeTeh2007a,
author = {Cai, J. F. and Lee, W. S. and Teh, Y. W.},
booktitle = {Proceedings of the International Workshop on Semantic Evaluations},
title = {{NUS-ML}: Improving Word Sense Disambiguation Using Topic Features},
volume = {4},
year = {2007},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/compling/semeval2007.pdf}
}
Y. J. Lim
,
Y. W. Teh
,
Variational Bayesian Approach to Movie Rating Prediction, in Proceedings of KDD Cup and Workshop, 2007.
Singular value decomposition (SVD) is a matrix decomposition algorithm that returns the optimal (in the sense of squared error) low-rank decomposition of a matrix. SVD has found widespread use across a variety of machine learning applications, where its output is interpreted as compact and informative representations of data. The Netflix Prize challenge, and collaborative filtering in general, is an ideal application for SVD, since the data is a matrix of ratings given by users to movies. It is thus not surprising to observe that most currently successful teams use SVD, either with an extension, or to interpolate with results returned by other algorithms. Unfortunately SVD can easily overfit due to the extreme data sparsity of the matrix in the Netflix Prize challenge, and care must be taken to regularize properly.
In this paper, we propose a Bayesian approach to alleviate overfitting in SVD, where priors are introduced and all parameters are integrated out using variational inference. We show experimentally that this gives significantly improved results over vanilla SVD. For truncated SVDs of rank 5, 10, 20, and 30, our proposed Bayesian approach achieves 2.2% improvement over a naïve approach, 1.6% improvement over a gradient descent approach dealing with unobserved entries properly, and 0.9% improvement over a maximum a posteriori (MAP) approach.
@inproceedings{LimTeh2007a,
author = {Lim, Y. J. and Teh, Y. W.},
booktitle = {Proceedings of KDD Cup and Workshop},
title = {Variational {B}ayesian Approach to Movie Rating Prediction},
year = {2007},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/bayesml/kddcup2007.pdf}
}
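For contrast with the variational approach, the gradient-descent baseline the paper compares against can be sketched in a few lines: minimize squared error on the observed entries only, with an L2 penalty on the factors. The toy ratings and all hyperparameter values below are made up for illustration; the paper's own method additionally integrates the factors out variationally.

```python
import random

def factorize(ratings, n_users, n_items, rank=2, lam=0.05, lr=0.02, epochs=200, seed=0):
    """Minimal regularized matrix factorization by stochastic gradient descent.

    ratings: list of observed (user, item, value) triples; unobserved
    entries are simply skipped, so sparsity is handled properly.
    """
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][k] * V[i][k] for k in range(rank))
            err = r - pred
            for k in range(rank):
                uk, vk = U[u][k], V[i][k]
                # Gradient step on squared error plus L2 penalty on the factors.
                U[u][k] += lr * (err * vk - lam * uk)
                V[i][k] += lr * (err * uk - lam * vk)
    return U, V

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0), (2, 1, 1.0)]
U, V = factorize(ratings, n_users=3, n_items=2)
```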
J. F. Cai
,
W. S. Lee
,
Y. W. Teh
,
Improving Word Sense Disambiguation Using Topic Features, in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-coNLL), 2007.
This paper presents a novel approach for exploiting the global context for the task of word sense disambiguation (WSD). This is done by using topic features constructed using the latent Dirichlet allocation (LDA) algorithm on unlabeled data. The features are incorporated into a modified naive Bayes network alongside other features such as part-of-speech of neighboring words, single words in the surrounding context, local collocations, and syntactic patterns. In both the English all-words task and the English lexical sample task, the method achieved significant improvement over the simple naive Bayes classifier and higher accuracy than the best official scores on Senseval-3 for both tasks.
@inproceedings{CaiLeeTeh2007b,
author = {Cai, J. F. and Lee, W. S. and Teh, Y. W.},
booktitle = {Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-coNLL)},
title = {Improving Word Sense Disambiguation Using Topic Features},
year = {2007},
bdsk-url-1 = {http://www.aclweb.org/anthology/D/D07/D07-1108},
bdsk-url-2 = {http://www.aclweb.org/anthology/D/D07/D07-1108.pdf}
}
Y. W. Teh
,
D. Newman
,
M. Welling
,
A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation, in Advances in Neural Information Processing Systems (NeurIPS), 2007, vol. 19, 1353–1360.
Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA.
@inproceedings{TehNewWel2007a,
author = {Teh, Y. W. and Newman, D. and Welling, M.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
pages = {1353-1360},
title = {A Collapsed Variational {B}ayesian Inference Algorithm for Latent {D}irichlet Allocation},
volume = {19},
year = {2007},
bdsk-url-1 = {http://papers.nips.cc/paper/3113-a-collapsed-variational-bayesian-inference-algorithm-for-latent-dirichlet-allocation},
bdsk-url-2 = {http://papers.nips.cc/paper/3113-a-collapsed-variational-bayesian-inference-algorithm-for-latent-dirichlet-allocation.pdf}
}
2006
Y. W. Teh
,
A Hierarchical Bayesian Language Model based on Pitman-Yor Processes, in Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, 2006, 985–992.
@inproceedings{Teh2006b,
author = {Teh, Y. W.},
booktitle = {Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics},
pages = {985-992},
title = {A Hierarchical {B}ayesian Language Model based on {P}itman-{Y}or Processes},
year = {2006},
bdsk-url-1 = {http://aclanthology.info/papers/a-hierarchical-bayesian-language-model-based-on-pitman-yor-processes},
bdsk-url-2 = {http://aclweb.org/anthology/P06-1124}
}
Y. W. Teh
,
A Bayesian Interpretation of Interpolated Kneser-Ney, School of Computing, National University of Singapore, TRA2/06, 2006.
Interpolated Kneser-Ney is one of the best smoothing methods for n-gram language models. Previous explanations for its superiority have been based on intuitive and empirical justifications of specific properties of the method. We propose a novel interpretation of interpolated Kneser-Ney as approximate inference in a hierarchical Bayesian model consisting of Pitman-Yor processes. As opposed to past explanations, our interpretation can recover exactly the formulation of interpolated Kneser-Ney, and performs better than interpolated Kneser-Ney when a better inference procedure is used.
@techreport{Teh2006a,
author = {Teh, Y. W.},
institution = {School of Computing, National University of Singapore},
number = {TRA2/06},
title = {A {B}ayesian Interpretation of Interpolated {K}neser-{N}ey},
year = {2006},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/compling/hpylm.pdf}
}
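The correspondence the report establishes can be checked numerically: in a Pitman-Yor bigram restaurant with one table per observed type and concentration θ = 0, the predictive probability is exactly interpolated Kneser-Ney with absolute discount d. A pure-Python sketch on a made-up toy corpus:

```python
from collections import Counter

def py_bigram_prob(word, context, bigrams, unigram_prob, d=0.75, theta=0.0):
    """Predictive probability in a Pitman-Yor bigram restaurant under the
    minimal-tables approximation (one table per observed type); with
    theta = 0 this is interpolated Kneser-Ney with absolute discount d.

    bigrams: Counter over (context, word) pairs.
    unigram_prob: base distribution, e.g. the parent restaurant's predictive.
    """
    ctx = {w: c for (u, w), c in bigrams.items() if u == context}
    total = sum(ctx.values())
    if total == 0:
        return unigram_prob(word)
    types = len(ctx)  # t_u: one table per observed type
    discounted = max(ctx.get(word, 0) - d, 0.0) / (theta + total)
    backoff = (theta + d * types) / (theta + total)
    return discounted + backoff * unigram_prob(word)

tokens = "a b a b a c".split()
bigrams = Counter(zip(tokens, tokens[1:]))
vocab = sorted(set(tokens))
uniform = lambda w: 1.0 / len(vocab)
p = {w: py_bigram_prob(w, "a", bigrams, uniform) for w in vocab}
```

The discounted mass d per observed type is exactly what gets redistributed through the backoff distribution, so the predictive probabilities still sum to one over the vocabulary.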
E. P. Xing
,
K. Sohn
,
M. I. Jordan
,
Y. W. Teh
,
Bayesian Multi-population Haplotype Inference via a Hierarchical Dirichlet process mixture, in International Conference on Machine Learning (ICML), 2006, vol. 23.
Uncovering the haplotypes of single nucleotide polymorphisms and their population demography is essential for many biological and medical applications. Methods for haplotype inference developed thus far, including methods based on coalescence, finite and infinite mixtures, and maximal parsimony, ignore the underlying population structure in the genotype data. As noted by Pritchard (2001), different populations can share a certain portion of their genetic ancestors, as well as have their own genetic components through migration and diversification. In this paper, we address the problem of multi-population haplotype inference. We capture cross-population structure using a nonparametric Bayesian prior known as the hierarchical Dirichlet process (HDP) (Teh et al., 2006), conjoining this prior with a recently developed Bayesian methodology for haplotype phasing known as DP-Haplotyper (Xing et al., 2004). We also develop an efficient sampling algorithm for the HDP based on a two-level nested Polya urn scheme. We show that our model outperforms extant algorithms on both simulated and real biological data.
@inproceedings{XinSohJor2006a,
author = {Xing, E. P. and Sohn, K. and Jordan, M. I. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {{B}ayesian Multi-population Haplotype Inference via a Hierarchical {D}irichlet process mixture},
volume = {23},
year = {2006},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/icml2006.pdf}
}
Y. W. Teh
,
M. I. Jordan
,
M. J. Beal
,
D. M. Blei
,
Hierarchical Dirichlet Processes, Journal of the American Statistical Association, vol. 101, no. 476, 1566–1581, 2006.
We consider problems involving groups of data, where each observation within a group is a draw from a mixture model, and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes in terms of a stick-breaking process, and a generalization of the Chinese restaurant process that we refer to as the “Chinese restaurant franchise.” We present Markov chain Monte Carlo algorithms for posterior inference in hierarchical Dirichlet process mixtures, and describe applications to problems in information retrieval and text modelling.
@article{TehJorBea2006a,
author = {Teh, Y. W. and Jordan, M. I. and Beal, M. J. and Blei, D. M.},
journal = {Journal of the American Statistical Association},
number = {476},
pages = {1566-1581},
title = {Hierarchical {D}irichlet Processes},
volume = {101},
year = {2006},
bdsk-url-1 = {http://www.jstor.org/stable/27639773},
bdsk-url-2 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/hdp2004.pdf}
}
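The "Chinese restaurant franchise" representation described in the paper is straightforward to sample from: each group runs its own CRP over tables, and each newly opened table draws its dish from a shared top-level CRP, which is how mixture components come to be shared across groups. A pure-Python sketch of simulation from the prior only, not the paper's MCMC posterior inference:

```python
import random

def crf_sample(group_sizes, alpha=1.0, gamma=1.0, seed=None):
    """Draw cluster (dish) assignments from the Chinese restaurant
    franchise representation of the HDP prior.

    alpha: concentration of each group's table-level CRP.
    gamma: concentration of the shared top-level (dish) CRP.
    Returns one list of dish labels per group; dishes are shared globally.
    """
    rng = random.Random(seed)
    dish_counts = []  # top-level counts: number of tables serving each dish
    assignments = []
    for size in group_sizes:
        tables = []      # customer counts per table in this group
        table_dish = []  # dish served at each table
        labels = []
        for _ in range(size):
            # Sit at an existing table w.p. proportional to its occupancy,
            # or open a new table w.p. proportional to alpha.
            weights = tables + [alpha]
            t = rng.choices(range(len(weights)), weights=weights)[0]
            if t == len(tables):
                tables.append(0)
                # A new table picks its dish from the shared top-level CRP.
                dweights = dish_counts + [gamma]
                d = rng.choices(range(len(dweights)), weights=dweights)[0]
                if d == len(dish_counts):
                    dish_counts.append(0)
                dish_counts[d] += 1
                table_dish.append(d)
            tables[t] += 1
            labels.append(table_dish[t])
        assignments.append(labels)
    return assignments

groups = crf_sample([20, 20, 20], alpha=1.0, gamma=1.0, seed=0)
```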
G. E. Hinton
,
S. Osindero
,
Y. W. Teh
,
A Fast Learning Algorithm for Deep Belief Networks, Neural Computation, vol. 18, no. 7, 1527–1554, 2006.
We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
@article{HinOsiTeh2006a,
author = {Hinton, G. E. and Osindero, S. and Teh, Y. W.},
journal = {Neural Computation},
number = {7},
pages = {1527-1554},
title = {A Fast Learning Algorithm for Deep Belief Networks},
volume = {18},
year = {2006},
bdsk-url-1 = {http://www.mitpressjournals.org/doi/abs/10.1162/neco.2006.18.7.1527},
bdsk-url-2 = {http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.7.1527}
}
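The building block of the greedy layer-wise procedure is a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). A deliberately minimal pure-Python sketch, with bias terms omitted for brevity (a real implementation would include them and use vectorized arithmetic); the toy data and hyperparameters are made up for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_rbm(data, n_hidden=3, lr=0.05, epochs=100, seed=0):
    """Train one RBM with CD-1 on binary vectors (biases omitted).

    data: list of binary visible vectors, all of the same length.
    Returns the learned weight matrix W[visible][hidden].
    """
    rng = random.Random(seed)
    n_vis = len(data[0])
    W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_vis)]

    def hid_probs(v):
        return [sigmoid(sum(v[i] * W[i][j] for i in range(n_vis))) for j in range(n_hidden)]

    def vis_probs(h):
        return [sigmoid(sum(h[j] * W[i][j] for j in range(n_hidden))) for i in range(n_vis)]

    for _ in range(epochs):
        for v0 in data:
            ph0 = hid_probs(v0)
            h0 = [1 if rng.random() < p else 0 for p in ph0]  # sample hidden state
            v1 = vis_probs(h0)   # one-step reconstruction (using probabilities)
            ph1 = hid_probs(v1)
            for i in range(n_vis):
                for j in range(n_hidden):
                    # Positive phase (data) minus negative phase (reconstruction).
                    W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return W

data = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
W = train_rbm(data)
```

In the deep belief network, the hidden activities inferred by one trained RBM become the training data for the next layer up.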
G. E. Hinton
,
S. Osindero
,
M. Welling
,
Y. W. Teh
,
Unsupervised Discovery of Non-linear Structure Using Contrastive Backpropagation, Cognitive Science, vol. 30, no. 4, 725–731, 2006.
We describe a way of modeling high-dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron-like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of each unit depends on the activities in earlier layers are learned by minimizing the energy assigned to data vectors that are actually observed and maximizing the energy assigned to “confabulations” that are generated by perturbing an observed data vector in a direction that decreases its energy under the current model.
@article{HinOsiWel2006a,
author = {Hinton, G. E. and Osindero, S. and Welling, M. and Teh, Y. W.},
journal = {Cognitive Science},
number = {4},
pages = {725-731},
title = {Unsupervised Discovery of Non-linear Structure Using Contrastive Backpropagation},
volume = {30},
year = {2006},
bdsk-url-1 = {http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog0000_76/abstract},
bdsk-url-2 = {http://www.stats.ox.ac.uk/\\~{}teh/papers/HinOsiWel2006a.pdf}
}
W. S. Lee
,
X. Zhang
,
Y. W. Teh
,
Semi-supervised Learning in Reproducing Kernel Hilbert Spaces Using Local Invariances, School of Computing, National University of Singapore, TRB3/06, 2006.
We propose a framework for semi-supervised learning in reproducing kernel Hilbert spaces using local invariances that explicitly characterize the behavior of the target function around both labeled and unlabeled data instances. Such invariances include: invariance to small changes to the data instances, invariance to averaging across a small neighbourhood around data instances, and invariance to local transformations such as translation and rotation. These invariances are approximated by minimizing loss functions on derivatives and local averages of the functions. We use a regularized cost function, consisting of the sum of loss functions penalized with the squared norm of the function, and give a representer theorem showing that an optimal function can be represented as a linear combination of a finite number of basis functions. For the representer theorem to hold, the derivatives and local averages are required to be bounded linear functionals in the reproducing kernel Hilbert space. We show that this is true in the reproducing kernel Hilbert spaces defined by Gaussian and polynomial kernels.
@techreport{LeeZhaTeh2006a,
author = {Lee, W. S. and Zhang, X. and Teh, Y. W.},
institution = {School of Computing, National University of Singapore},
number = {TRB3/06},
title = {Semi-supervised Learning in Reproducing Kernel {H}ilbert Spaces Using Local Invariances},
year = {2006},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/invariances/semisup.pdf}
}
2005
Y. W. Teh
,
M. I. Jordan
,
M. J. Beal
,
D. M. Blei
,
Sharing Clusters among Related Groups: Hierarchical Dirichlet Processes, in Advances in Neural Information Processing Systems (NeurIPS), 2005, vol. 17.
We propose the hierarchical Dirichlet process (HDP), a nonparametric Bayesian model for clustering problems involving multiple groups of data. Each group of data is modeled with a mixture, with the number of components being open-ended and inferred automatically by the model. Further, components can be shared across groups, allowing dependencies across groups to be modeled effectively as well as conferring generalization to new groups. Such grouped clustering problems occur often in practice, e.g. in the problem of topic discovery in document corpora. We report experimental results on three text corpora showing the effective and superior performance of the HDP over previous models.
@inproceedings{TehJorBea2005a,
author = {Teh, Y. W. and Jordan, M. I. and Beal, M. J. and Blei, D. M.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Sharing Clusters among Related Groups: Hierarchical {D}irichlet Processes},
volume = {17},
year = {2005},
bdsk-url-1 = {https://papers.nips.cc/paper/2698-sharing-clusters-among-related-groups-hierarchical-dirichlet-processes},
bdsk-url-2 = {https://papers.nips.cc/paper/2698-sharing-clusters-among-related-groups-hierarchical-dirichlet-processes.pdf}
}
J. Edwards
,
Y. W. Teh
,
D. A. Forsyth
,
R. Bock
,
M. Maire
,
G. Vesom
,
Making Latin Manuscripts Searchable using gHMM’s, in Advances in Neural Information Processing Systems (NeurIPS), 2005, vol. 17.
We describe a method that can make a scanned, handwritten mediaeval Latin manuscript accessible to full text search. A generalized HMM is fitted, using transcribed Latin to obtain a transition model and one example each of 22 letters to obtain an emission model. We show results for unigram, bigram and trigram models. Our method transcribes 25 pages of a manuscript of Terence with fair accuracy (75% of letters correctly transcribed). Search results are very strong; we use examples of variant spellings to demonstrate that the search respects the ink of the document. Furthermore, our model produces fair searches on a document from which we obtained no training data.
@inproceedings{EdwTehFor2005a,
author = {Edwards, J. and Teh, Y. W. and Forsyth, D. A. and Bock, R. and Maire, M. and Vesom, G.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Making Latin Manuscripts Searchable using {gHMM}'s},
volume = {17},
year = {2005},
bdsk-url-1 = {http://papers.nips.cc/paper/2706-making-latin-manuscripts-searchable-using-ghmms},
bdsk-url-2 = {http://papers.nips.cc/paper/2706-making-latin-manuscripts-searchable-using-ghmms.pdf}
}
Y. W. Teh
,
M. Seeger
,
M. I. Jordan
,
Semiparametric Latent Factor Models, in Artificial Intelligence and Statistics (AISTATS), 2005, vol. 10.
We propose a semiparametric model for regression problems involving multiple response variables. The model makes use of a set of Gaussian processes that are linearly mixed to capture dependencies that may exist among the response variables. We propose an efficient approximate inference scheme for this semiparametric model whose complexity is linear in the number of training data points. We present experimental results in the domain of multi-joint robot arm dynamics.
@inproceedings{TehSeeJor2005a,
author = {Teh, Y. W. and Seeger, M. and Jordan, M. I.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Semiparametric Latent Factor Models},
volume = {10},
year = {2005},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/aistats2005.pdf}
}
M. Seeger
,
Y. W. Teh
,
M. I. Jordan
,
Semiparametric Latent Factor Models, Division of Computer Science, University of California at Berkeley, 2005.
We propose a semiparametric model for regression and classification problems involving multiple response variables. The model makes use of a set of Gaussian processes to model the relationship to the inputs in a nonparametric fashion. Conditional dependencies between the responses can be captured through a linear mixture of the driving processes. This feature becomes important if some of the responses of predictive interest are less densely supplied by observed data than related auxiliary ones. We propose an efficient approximate inference scheme for this semiparametric model whose complexity is linear in the number of training data points.
@techreport{SeeTehJor2005a,
author = {Seeger, M. and Teh, Y. W. and Jordan, M. I.},
institution = {Division of Computer Science, University of California at Berkeley},
title = {Semiparametric Latent Factor Models},
year = {2005},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/slfm2005.pdf}
}
M. Welling
,
T. Minka
,
Y. W. Teh
,
Structured Region Graphs: Morphing EP into GBP, in Uncertainty in Artificial Intelligence (UAI), 2005, vol. 21.
GBP and EP are two successful algorithms for approximate probabilistic inference, which are based on different approximation strategies. An open problem in both algorithms has been how to choose an appropriate approximation structure. We introduce “structured region graphs”, a formalism which marries these two strategies, reveals a deep connection between them, and suggests how to choose good approximation structures. In this formalism, each region has an internal structure which defines an exponential family, whose sufficient statistics must be matched by the parent region. Reduction operators on these structures allow conversion between EP and GBP free energies. Thus it is revealed that all EP approximations on discrete variables are special cases of GBP, and conversely that some well-known GBP approximations, such as overlapping squares, are special cases of EP. Furthermore, region graphs derived from EP have a number of good structural properties, including maxent-normality and an overall counting number of one. The result is a convenient framework for producing high-quality approximations with a user-adjustable level of complexity.
@inproceedings{WelMinTeh2005a,
author = {Welling, M. and Minka, T. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {Structured Region Graphs: Morphing {EP} into {GBP}},
volume = {21},
year = {2005},
bdsk-url-1 = {https://arxiv.org/pdf/1207.1426v1.pdf}
}
2004
M. Welling
,
M. Rosen-Zvi
,
Y. W. Teh
,
Approximate Inference by Markov Chains on Union Spaces, in International Conference on Machine Learning (ICML), 2004, vol. 21.
@inproceedings{WelRosTeh2004a,
author = {Welling, M. and Rosen-Zvi, M. and Teh, Y. W.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {Approximate Inference by Markov Chains on Union Spaces},
volume = {21},
year = {2004},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/papers/WelRosTeh2004a.pdf}
}
M. Welling
,
Y. W. Teh
,
Linear Response Algorithms for Approximate Inference in Graphical Models, Neural Computation, vol. 16, 197–221, 2004.
Belief propagation (BP) on cyclic graphs is an efficient algorithm for computing approximate marginal probability distributions over single nodes and neighboring nodes in the graph. However, it does not prescribe a way to compute joint distributions over pairs of distant nodes in the graph. In this article, we propose two new algorithms for approximating these pairwise probabilities, based on the linear response theorem. The first is a propagation algorithm that is shown to converge if BP converges to a stable fixed point. The second algorithm is based on matrix inversion. Applying these ideas to Gaussian random fields, we derive a propagation algorithm for computing the inverse of a matrix.
@article{WelTeh2004a,
author = {Welling, M. and Teh, Y. W.},
journal = {Neural Computation},
pages = {197-221},
title = {Linear Response Algorithms for Approximate Inference in Graphical Models},
volume = {16},
year = {2004},
bdsk-url-1 = {http://cognet.mit.edu/journal/10.1162/08997660460734056},
bdsk-url-2 = {http://cognet.mit.edu/system/cogfiles/journalpdfs/08997660460734056.pdf}
}
T. Miller
,
A. C. Berg
,
J. Edwards
,
M. Maire
,
R. White
,
Y. W. Teh
,
E. Learned-Miller
,
D. A. Forsyth
,
Faces and Names in the News, in Proceedings of the Conference on Computer Vision and Pattern Recognition, 2004.
We show that quite good face clustering is possible for a dataset of inaccurately and ambiguously labelled face images. Our dataset is 44,773 face images, obtained by applying a face finder to approximately half a million captioned news images. This dataset is more realistic than usual face recognition datasets, because it contains faces captured “in the wild” in a variety of configurations with respect to the camera, taking a variety of expressions, and under illumination of widely varying color. Each face image is associated with a set of names, automatically extracted from the associated caption. Many, but not all, such sets contain the correct name.
We cluster face images in appropriate discriminant coordinates. We use a clustering procedure to break ambiguities in labelling and identify incorrectly labelled faces. A merging procedure then identifies variants of names that refer to the same individual. The resulting representation can be used to label faces in news images or to organize news pictures by individuals present.
An alternative view of our procedure is as a process that cleans up noisy supervised data. We demonstrate how to use entropy measures to evaluate such procedures.
@inproceedings{MilBerEdw2004a,
author = {Miller, T. and Berg, A. C. and Edwards, J. and Maire, M. and White, R. and Teh, Y. W. and Learned-Miller, E. and Forsyth, D. A.},
booktitle = {Proceedings of the Conference on Computer Vision and Pattern Recognition},
title = {Faces and Names in the News},
year = {2004},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/compvision/cvpr2004.pdf}
}
Y. W. Teh
,
M. I. Jordan
,
M. J. Beal
,
D. M. Blei
,
Hierarchical Dirichlet Processes, Department of Statistics, University of California at Berkeley, 653, 2004.
We consider problems involving groups of data, where each observation within a group is a draw from a mixture model, and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes in terms of a stick-breaking process, and a generalization of the Chinese restaurant process that we refer to as the “Chinese restaurant franchise.” We present Markov chain Monte Carlo algorithms for posterior inference in hierarchical Dirichlet process mixtures, and describe applications to problems in information retrieval and text modelling.
@techreport{TehJorBea2004a,
author = {Teh, Y. W. and Jordan, M. I. and Beal, M. J. and Blei, D. M.},
institution = {Department of Statistics, University of California at Berkeley},
number = {653},
title = {Hierarchical {D}irichlet Processes},
year = {2004},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/npbayes/hdp2004.pdf}
}
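The stick-breaking representation mentioned in the abstract can be sketched with a finite truncation. The truncation level and concentration values below are arbitrary illustrative choices, and the group-level weights use the finite-dimensional Dirichlet form of the hierarchy; this is a toy sketch of the construction, not the paper's MCMC inference.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, K, rng):
    """Truncated stick-breaking weights: beta_k ~ Beta(1, alpha)."""
    betas = rng.beta(1.0, alpha, size=K)
    betas[-1] = 1.0  # absorb leftover mass so the weights sum to one
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

K = 20                     # truncation level (an approximation)
gamma, alpha0 = 1.0, 1.0   # concentration parameters (illustrative)

# Global measure G0: weights over K shared atoms (mixture components).
beta = stick_breaking(gamma, K, rng)
atoms = rng.standard_normal(K)   # toy atom locations

# Each group's DP re-weights the SAME atoms, so groups share components:
# in the truncated representation, pi_j ~ Dirichlet(alpha0 * beta).
n_groups = 3
pi = rng.dirichlet(alpha0 * beta, size=n_groups)

print(np.allclose(pi.sum(axis=1), 1.0))  # True: each row is a distribution
```

The key point the sketch makes concrete is the sharing: every group's weight vector `pi[j]` is supported on the same K atoms, because the group-level base measure is itself the discrete draw `(beta, atoms)`.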
2003
Y. W. Teh
,
M. Welling
,
S. Osindero
,
G. E. Hinton
,
Energy-Based Models for Sparse Overcomplete Representations, Journal of Machine Learning Research (JMLR), vol. 4, 1235–1260, 2003.
@article{TehWelOsi2003a,
author = {Teh, Y. W. and Welling, M. and Osindero, S. and Hinton, G. E.},
journal = {Journal of Machine Learning Research (JMLR)},
pages = {1235-1260},
title = {Energy-Based Models for Sparse Overcomplete Representations},
volume = {4},
year = {2003},
bdsk-url-1 = {http://www.jmlr.org/papers/v4/teh03a.html},
bdsk-url-2 = {http://www.jmlr.org/papers/volume4/teh03a/teh03a.pdf}
}
Y. W. Teh
,
Bethe Free Energy and Contrastive Divergence Approximations for Undirected Graphical Models, PhD thesis, Department of Computer Science, University of Toronto, 2003.
As the machine learning community tackles more complex and harder problems, the graphical models needed to solve such problems become larger and more complicated. As a result, performing inference and learning exactly for such graphical models becomes ever more expensive, and approximate inference and learning techniques become ever more prominent.
There are a variety of techniques for approximate inference and learning in the literature. This thesis contributes some new ideas in the products of experts (PoEs) class of models (Hinton, 2002), and the Bethe free energy approximations (Yedidia et al., 2001).
For PoEs, our contribution is in developing new PoE models for continuous-valued domains. We developed RBMrate, a model for discretized continuous-valued data. We applied it to face recognition to demonstrate its abilities. We also developed energy-based models (EBMs) – flexible probabilistic models where the building blocks consist of energy terms computed using a feed-forward network. We show that standard square noiseless independent components analysis (ICA) (Bell and Sejnowski, 1995) can be viewed as a restricted form of EBMs. Extending this relationship with ICA, we describe sparse and over-complete representations of data where the inference process is trivial since it is simply an EBM.
For Bethe free energy approximations, our contribution is a theory relating belief propagation and iterative scaling. We show that both belief propagation and iterative scaling updates can be derived as fixed point equations for constrained minimization of the Bethe free energy. This allows us to develop a new algorithm to directly minimize the Bethe free energy, and to apply the Bethe free energy to learning in addition to inference. We also describe improvements to the efficiency of standard learning algorithms for undirected graphical models (Jirousek and Preucil, 1995).
@phdthesis{Teh2003a,
author = {Teh, Y. W.},
school = {Department of Computer Science, University of Toronto},
title = {Bethe Free Energy and Contrastive Divergence Approximations for Undirected Graphical Models},
year = {2003},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/theses/phdthesis.pdf}
}
Y. W. Teh
,
S. Roweis
,
Automatic Alignment of Local Representations, in Advances in Neural Information Processing Systems (NeurIPS), 2003, vol. 15.
We present an automatic alignment procedure which maps the disparate internal representations learned by several local dimensionality reduction experts into a single, coherent global coordinate system for the original data space. Our algorithm can be applied to any set of experts, each of which produces a low-dimensional local representation of a high-dimensional input. Unlike recent efforts to coordinate such models by modifying their objective functions [1, 2], our algorithm is invoked after training and applies an efficient eigensolver to post-process the trained models. The post-processing has no local optima and the size of the system it must solve scales with the number of local models rather than the number of original data points, making it more efficient than model-free algorithms such as Isomap [3] or LLE [4].
@inproceedings{TehRow2003a,
author = {Teh, Y. W. and Roweis, S.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Automatic Alignment of Local Representations},
volume = {15},
year = {2003},
bdsk-url-1 = {http://papers.nips.cc/paper/2180-automatic-alignment-of-local-representations},
bdsk-url-2 = {http://papers.nips.cc/paper/2180-automatic-alignment-of-local-representations.pdf}
}
Y. W. Teh
,
M. Welling
,
On Improving the Efficiency of the Iterative Proportional Fitting Procedure, in Artificial Intelligence and Statistics (AISTATS), 2003, vol. 9.
Iterative proportional fitting (IPF) on junction trees is an important tool for learning in graphical models. We identify the propagation and IPF updates on the junction tree as fixed point equations of a single constrained entropy maximization problem. This allows a more efficient message updating protocol than the well-known effective IPF of Jirousek and Preucil (1995). When the junction tree has an intractably large maximum clique size we propose to maximize an approximate constrained entropy based on region graphs (Yedidia et al., 2002). To maximize the new objective we propose a “loopy” version of IPF. We show that this yields accurate estimates of the weights of undirected graphical models in a simple experiment.
@inproceedings{TehWel2003a,
author = {Teh, Y. W. and Welling, M.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {On Improving the Efficiency of the Iterative Proportional Fitting Procedure},
volume = {9},
year = {2003},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/inference/aistats2003.pdf}
}
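For readers unfamiliar with IPF itself, here is the classical procedure on a single two-way table: alternately rescale rows and columns until both target marginals are matched. This is a textbook toy only; the paper's contribution concerns IPF on junction trees and region graphs, not this simple case.

```python
import numpy as np

# Classical IPF: rescale a positive table so its marginals match targets.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])         # initial joint table (any positive table)
row_target = np.array([0.4, 0.6])  # desired row marginals
col_target = np.array([0.3, 0.7])  # desired column marginals

P = T / T.sum()
for _ in range(100):
    P *= (row_target / P.sum(axis=1))[:, None]  # match row marginals
    P *= (col_target / P.sum(axis=0))[None, :]  # match column marginals

print(np.round(P.sum(axis=1), 6), np.round(P.sum(axis=0), 6))
# → [0.4 0.6] [0.3 0.7]
```

Each rescaling step is exactly the IPF fixed-point update; on junction trees the same idea is applied clique by clique, which is where the efficiency questions studied in the paper arise.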
M. Welling
,
Y. W. Teh
,
Approximate Inference in Boltzmann Machines, Artificial Intelligence, vol. 143, no. 1, 19–50, 2003.
Inference in Boltzmann machines is NP-hard in general. As a result approximations are often necessary. We discuss first order mean field and second order Onsager truncations of the Plefka expansion of the Gibbs free energy. The Bethe free energy is introduced and rewritten as a Gibbs free energy. From there a convergent belief optimization algorithm is derived to minimize the Bethe free energy. An analytic expression for the linear response estimate of the covariances is found which is exact on Boltzmann trees. Finally, a number of theorems are proven concerning the Plefka expansion, relating the first order mean field and the second order Onsager approximation to the Bethe approximation. Experiments compare mean field approximation, Onsager approximation, belief propagation and belief optimization.
@article{WelTeh2003a,
author = {Welling, M. and Teh, Y. W.},
journal = {Artificial Intelligence},
number = {1},
pages = {19-50},
title = {Approximate Inference in {B}oltzmann Machines},
volume = {143},
year = {2003},
bdsk-url-1 = {http://www.sciencedirect.com/science/article/pii/S0004370202003612},
bdsk-url-2 = {http://www.stats.ox.ac.uk/\\~{}teh/research/inference/aij2003.pdf}
}
2002
Y. W. Teh
,
M. Welling
,
The Unified Propagation and Scaling Algorithm, in Advances in Neural Information Processing Systems (NeurIPS), 2002, vol. 14.
In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can be solved by approximating the KL divergence with a Bethe free energy. The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm reduces to a convergent alternative to loopy belief propagation when no constraints are present. Experiments show the viability of our algorithm.
@inproceedings{TehWel2002a,
author = {Teh, Y. W. and Welling, M.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {The Unified Propagation and Scaling Algorithm},
volume = {14},
year = {2002},
bdsk-url-1 = {http://papers.nips.cc/paper/2001-the-unified-propagation-and-scaling-algorithm},
bdsk-url-2 = {http://papers.nips.cc/paper/2001-the-unified-propagation-and-scaling-algorithm.pdf}
}
S. Kakade
,
Y. W. Teh
,
S. Roweis
,
An Alternate Objective Function for Markovian Fields, in International Conference on Machine Learning (ICML), 2002, vol. 19.
In labelling or prediction tasks, a trained model’s test performance is often based on the quality of its single-time marginal distributions over labels rather than its joint distribution over label sequences. We propose using a new cost function for discriminative learning that more accurately reflects such test time conditions. We present an efficient method to compute the gradient of this cost for Maximum Entropy Markov Models, Conditional Random Fields, and for an extension of these models involving hidden states. Our experimental results show that the new cost can give significant improvements and that it provides a novel and effective way of dealing with the “label-bias” problem.
@inproceedings{KakTehRow2002a,
author = {Kakade, S. and Teh, Y. W. and Roweis, S.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {An Alternate Objective Function for {M}arkovian Fields},
volume = {19},
year = {2002},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/newcost/icml2002.pdf}
}
2001
M. Welling
,
Y. W. Teh
,
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation, in Uncertainty in Artificial Intelligence (UAI), 2001, vol. 17.
We present a novel inference algorithm for arbitrary, binary, undirected graphs. Unlike loopy belief propagation, which iterates fixed point equations, we directly descend on the Bethe free energy. The algorithm consists of two phases. First, we update the pairwise probabilities, given the marginal probabilities at each unit, using an analytic expression. Next, we update the marginal probabilities, given the pairwise probabilities, by following the negative gradient of the Bethe free energy. Both steps are guaranteed to decrease the Bethe free energy, and since it is lower bounded, the algorithm is guaranteed to converge to a local minimum. We also show that the Bethe free energy is equal to the TAP free energy up to second order in the weights. In experiments we confirm that when belief propagation converges it usually finds the same solutions as our belief optimization method. However, in cases where belief propagation fails to converge, belief optimization continues to converge to reasonable beliefs. The stable nature of belief optimization makes it ideally suited for learning graphical models from data.
@inproceedings{WelTeh2001a,
author = {Welling, M. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
title = {Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation},
volume = {17},
year = {2001},
bdsk-url-1 = {https://arxiv.org/pdf/1301.2317v1.pdf}
}
G. E. Hinton
,
Y. W. Teh
,
Discovering Multiple Constraints that are Frequently Approximately Satisfied, in Uncertainty in Artificial Intelligence (UAI), 2001, vol. 17, 227–234.
Some high-dimensional data sets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations.
@inproceedings{HinTeh2001a,
author = {Hinton, G. E. and Teh, Y. W.},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
pages = {227-234},
title = {Discovering Multiple Constraints that are Frequently Approximately Satisfied},
volume = {17},
year = {2001},
bdsk-url-1 = {https://arxiv.org/pdf/1301.2278v1.pdf}
}
G. E. Hinton
,
M. Welling
,
Y. W. Teh
,
S. Osindero
,
A New View of ICA, in Proceedings of the International Conference on Independent Component Analysis and Blind Signal Separation, 2001, vol. 3.
We present a new way of interpreting ICA as a probability density model and a new way of fitting this model to data. The advantage of our approach is that it suggests simple, novel extensions to overcomplete, undercomplete and multilayer non-linear versions of ICA.
@inproceedings{HinWelTeh2001a,
author = {Hinton, G. E. and Welling, M. and Teh, Y. W. and Osindero, S.},
booktitle = {Proceedings of the International Conference on Independent Component Analysis and Blind Signal Separation},
title = {A New View of {ICA}},
volume = {3},
year = {2001},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/unsup/ica2001.pdf}
}
Y. W. Teh
,
G. E. Hinton
,
Rate-Coded Restricted Boltzmann Machines for Face Recognition, in Advances in Neural Information Processing Systems (NeurIPS), 2001, vol. 13.
We describe a neurally-inspired, unsupervised learning algorithm that builds a non-linear generative model for pairs of face images from the same individual. Individuals are then recognized by finding the highest relative probability pair among all pairs that consist of a test image and an image whose identity is known. Our method compares favorably with other methods in the literature. The generative model consists of a single layer of rate-coded, non-linear feature detectors and it has the property that, given a data vector, the true posterior probability distribution over the feature detector activities can be inferred rapidly without iteration or approximation. The weights of the feature detectors are learned by comparing the correlations of pixel intensities and feature activations in two phases: When the network is observing real data and when it is observing reconstructions of real data generated from the feature activations.
@inproceedings{TehHin2001a,
author = {Teh, Y. W. and Hinton, G. E.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Rate-Coded Restricted {Boltzmann} Machines for Face Recognition},
volume = {13},
year = {2001},
bdsk-url-1 = {https://papers.nips.cc/paper/1886-rate-coded-restricted-boltzmann-machines-for-face-recognition},
bdsk-url-2 = {https://papers.nips.cc/paper/1886-rate-coded-restricted-boltzmann-machines-for-face-recognition.pdf}
}
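The two-phase correlation-matching rule in the abstract is a contrastive-divergence-style update. A minimal sketch with ordinary binary units (the paper's units are rate-coded, and probabilities are used in place of samples here for brevity) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Minimal RBM-style CD-1 update: compare visible-hidden correlations on
# data versus one-step reconstructions. Sizes and data are toy choices.
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))

v0 = rng.integers(0, 2, size=(10, n_vis)).astype(float)  # toy "data" batch

# Positive phase: feature activations given the observed data.
h0 = sigmoid(v0 @ W)
# Negative phase: reconstruct the visibles, then recompute the features.
v1 = sigmoid(h0 @ W.T)
h1 = sigmoid(v1 @ W)

# Learning rule: difference of correlations between the two phases.
W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

print(W.shape)  # (6, 4)
```

The update pushes the model's reconstruction statistics toward the data statistics, which is exactly the "two phases" comparison the abstract describes.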
Y. W. Teh
,
M. Welling
,
Passing and Bouncing Messages for Generalized Inference, Gatsby Computational Neuroscience Unit, University College London, GCNU TR 2001-01, 2001.
@techreport{TehWel2001a,
author = {Teh, Y. W. and Welling, M.},
institution = {Gatsby Computational Neuroscience Unit, University College London},
number = {GCNU TR 2001-01},
title = {Passing and Bouncing Messages for Generalized Inference},
year = {2001},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/inference/bpis2001.pdf}
}
2000
G. E. Hinton
,
Z. Ghahramani
,
Y. W. Teh
,
Learning to Parse Images, in Advances in Neural Information Processing Systems (NeurIPS), 2000, vol. 12.
We describe a class of probabilistic models that we call credibility networks. Using parse trees as internal representations of images, credibility networks are able to perform segmentation and recognition simultaneously, removing the need for ad hoc segmentation heuristics. Promising results in the problem of segmenting handwritten digits were obtained.
@inproceedings{HinGhaTeh2000a,
author = {Hinton, G. E. and Ghahramani, Z. and Teh, Y. W.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
title = {Learning to Parse Images},
volume = {12},
year = {2000},
bdsk-url-1 = {https://papers.nips.cc/paper/1710-learning-to-parse-images},
bdsk-url-2 = {https://papers.nips.cc/paper/1710-learning-to-parse-images.pdf}
}
1998
F. Bacchus
,
Y. W. Teh
,
Making Forward Chaining Relevant, in Proceedings of the International Conference on Artificial Intelligence Planning Systems, 1998.
Planning by forward chaining through the world space has long been dismissed as being “obviously” infeasible. Nevertheless, this approach to planning has many advantages. Most importantly, forward chaining planners maintain complete descriptions of the intermediate states that arise during the course of the plan’s execution. These states can be utilized to provide highly effective search control. Another advantage is that such planners can support richer planning representations that can model, e.g., resources and resource consumption. Forward chaining planners are still plagued, however, by their traditional weaknesses: a lack of goal direction, and the fact that they search totally ordered action sequences. In this paper we address the issue of goal direction. We present two algorithms that provide a forward chaining planner with more information about the goal, and allow it to avoid certain types of irrelevant state information and actions.
@inproceedings{BacTeh1998a,
author = {Bacchus, F. and Teh, Y. W.},
booktitle = {Proceedings of the International Conference on Artificial Intelligence Planning Systems},
title = {Making Forward Chaining Relevant},
year = {1998},
bdsk-url-1 = {http://www.stats.ox.ac.uk/\\~{}teh/research/planning/aips98.pdf}
}
V. Perrone
,
P. A. Jenkins
,
D. Spano
,
Y. W. Teh
,
NIPS 1987-2015 dataset. 2016.
@software{PerJenSpa2016b,
author = {Perrone, V. and Jenkins, P. A. and Spano, D. and Teh, Y. W.},
title = {NIPS 1987-2015 dataset},
year = {2016},
bdsk-url-1 = {https://archive.ics.uci.edu/ml/datasets/NIPS+Conference+Papers+1987-2015}
}
B. Lakshminarayanan
,
D. M. Roy
,
Y. W. Teh
,
Mondrian Forest. 2016.
@software{LakRoyTeh2016b,
author = {Lakshminarayanan, B. and Roy, D. M. and Teh, Y. W.},
title = {{M}ondrian Forest},
year = {2016},
bdsk-url-1 = {https://github.com/balajiln/mondrianforest}
}
L. Hasenclever
,
S. Webb
,
T. Lienart
,
S. Vollmer
,
B. Lakshminarayanan
,
C. Blundell
,
Y. W. Teh
,
Posterior Server. 2016.
This paper makes two contributions to Bayesian machine learning algorithms. Firstly, we propose stochastic natural gradient expectation propagation (SNEP), a novel alternative to expectation propagation (EP), a popular variational inference algorithm. SNEP is a black box variational algorithm, in that it does not require any simplifying assumptions on the distribution of interest, beyond the existence of some Monte Carlo sampler for estimating the moments of the EP tilted distributions. Further, as opposed to EP which has no guarantee of convergence, SNEP can be shown to be convergent, even when using Monte Carlo moment estimates. Secondly, we propose a novel architecture for distributed Bayesian learning which we call the posterior server. The posterior server allows scalable and robust Bayesian learning in cases where a dataset is stored in a distributed manner across a cluster, with each compute node containing a disjoint subset of data. An independent Monte Carlo sampler is run on each compute node, with direct access only to the local data subset, but which targets an approximation to the global posterior distribution given all data across the whole cluster. This is achieved by using a distributed asynchronous implementation of SNEP to pass messages across the cluster. We demonstrate SNEP and the posterior server on distributed Bayesian learning of logistic regression and neural networks.
@software{HasWebLie2016a,
author = {Hasenclever, L. and Webb, S. and Lienart, T. and Vollmer, S. and Lakshminarayanan, B. and Blundell, C. and Teh, Y. W.},
title = {Posterior Server},
year = {2016},
bdsk-url-1 = {https://github.com/BigBayes/PosteriorServer}
}
B. Lakshminarayanan
,
D. M. Roy
,
Y. W. Teh
,
PGBart. 2015.
@software{LakRoyTeh2015a,
author = {Lakshminarayanan, B. and Roy, D. M. and Teh, Y. W.},
title = {PGBart},
year = {2015},
bdsk-url-1 = {https://github.com/balajiln/pgbart}
}
M. De Iorio
,
L. T. Elliott
,
S. Favaro
,
Y. W. Teh
,
HDPStructure. 2015.
@software{De-EllFav2015b,
author = {{De Iorio}, M. and Elliott, L. T. and Favaro, S. and Teh, Y. W.},
title = {HDPStructure},
year = {2015},
bdsk-url-1 = {https://github.com/BigBayes/HDPStructure}
}
L. Boyles
,
Y. W. Teh
,
CPABS: Cancer Phylogenetic Reconstruction with Aldous’ Beta Splitting. 2015.
@software{BoyTeh2015a,
author = {Boyles, L. and Teh, Y. W.},
title = {{CPABS}: Cancer Phylogenetic Reconstruction with {A}ldous' Beta Splitting},
year = {2015},
bdsk-url-1 = {https://github.com/BigBayes/CPABS}
}
2014
M. Xu
,
B. Lakshminarayanan
,
Y. W. Teh
,
J. Zhu
,
B. Zhang
,
SMS: Sampling via Moment Sharing, Advances in Neural Information Processing Systems. 2014.
We propose a distributed Markov chain Monte Carlo (MCMC) inference algorithm for large scale Bayesian posterior simulation. We assume that the dataset is partitioned and stored across nodes of a cluster. Our procedure involves an independent MCMC posterior sampler at each node based on its local partition of the data. Moment statistics of the local posteriors are collected from each sampler and propagated across the cluster using expectation propagation message passing with low communication costs. The moment sharing scheme improves posterior estimation quality by enforcing agreement among the samplers. We demonstrate the speed and inference quality of our method with empirical studies on Bayesian logistic regression and sparse linear regression with a spike-and-slab prior.
@software{XuLakTeh2014b,
author = {Xu, M. and Lakshminarayanan, B. and Teh, Y. W. and Zhu, J. and Zhang, B.},
booktitle = {Advances in Neural Information Processing Systems},
title = {{SMS}: Sampling via Moment Sharing},
year = {2014},
bdsk-url-1 = {https://github.com/BigBayes/SMS}
}