I’m a final-year DPhil (PhD) student in the Department of Computer Science at the University of Oxford.
My current research interests lie in Probabilistic Programming and Machine Learning, especially in optimizing the design of Probabilistic Programming Languages (PPLs) and automating efficient inference algorithms in Probabilistic Programming Systems (PPSs).
2020
D. Tolpin, Y. Zhou, H. Yang, Stochastically Differentiable Probabilistic Programs, arXiv preprint arXiv:2003.00704, 2020.
@article{tolpin2020stochastically,
title = {Stochastically Differentiable Probabilistic Programs},
author = {Tolpin, David and Zhou, Yuan and Yang, Hongseok},
journal = {arXiv preprint arXiv:2003.00704},
year = {2020}
}
2019
Y. Zhou, B. Gram-Hansen, T. Kohn, T. Rainforth, H. Yang, F. Wood, LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models, International Conference on Artificial Intelligence and Statistics (AISTATS), 2019, 148–157.
We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables. The key success of this language and its compilation scheme is in its ability to automatically distinguish the discontinuous and continuous parameters in the density function, while further providing runtime checks of when discontinuity boundaries have been crossed. This enables the introduction of new inference engines that are able to exploit gradient information, while remaining efficient for models which are not everywhere differentiable. We demonstrate this ability by introducing a discontinuous Hamiltonian Monte Carlo (DHMC) inference engine that is able to deliver automated and efficient inference for non-differentiable models. Our system is backed up by a mathematical formalism that ensures that any model expressed in this language has a density with a sufficiently low measure of discontinuities to maintain the validity of the inference engine.
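To give a flavour of the class of models involved, below is a minimal Python sketch (my own toy illustration, not the LF-PPL implementation or its API): a piecewise-continuous log-density whose if-branch creates a discontinuity boundary at x = 0, together with a leapfrog step that performs the kind of runtime boundary-crossing check described above. All function names here are illustrative assumptions.

import numpy as np

def log_density(x):
    # Toy piecewise-continuous log-density with a discontinuity at x = 0;
    # the if-branch is the kind of construct the compilation scheme flags.
    if x > 0:
        return -0.5 * (x - 1.0) ** 2          # Gaussian branch centred at +1
    return -0.5 * (x + 1.0) ** 2 - 0.5        # offset Gaussian branch centred at -1

def grad_log_density(x, eps=1e-6):
    # Finite-difference gradient; valid away from the boundary.
    return (log_density(x + eps) - log_density(x - eps)) / (2.0 * eps)

def leapfrog_step(x, p, step):
    # One leapfrog step plus a runtime check of whether the boundary at
    # x = 0 was crossed, which is the signal a DHMC-style integrator
    # needs in order to handle the jump in the density.
    p = p + 0.5 * step * grad_log_density(x)
    x_new = x + step * p
    crossed = (x > 0.0) != (x_new > 0.0)
    p = p + 0.5 * step * grad_log_density(x_new)
    return x_new, p, crossed

x, p, crossed = leapfrog_step(0.05, -1.0, 0.2)
print(x, p, crossed)   # crossed is True: this step lands on the other branch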
@inproceedings{zhou2019lf,
title = {{LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models}},
author = {Zhou, Yuan and Gram-Hansen, Bradley and Kohn, Tobias and Rainforth, Tom and Yang, Hongseok and Wood, Frank},
booktitle = {The 22nd International Conference on Artificial Intelligence and Statistics},
pages = {148--157},
year = {2019}
}
Y. Zhou, H. Yang, Y. W. Teh, T. Rainforth, Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support, International Conference on Machine Learning (ICML, to appear), 2019.
@article{zhou2019divide,
title = {Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support},
author = {Zhou, Yuan and Yang, Hongseok and Teh, Yee Whye and Rainforth, Tom},
journal = {International Conference on Machine Learning (ICML, to appear)},
year = {2019}
}
2018
B. Gram-Hansen, Y. Zhou, T. Kohn, T. Rainforth, H. Yang, F. Wood, Hamiltonian Monte Carlo for Probabilistic Programs with Discontinuities, in International Conference on Probabilistic Programming, 2018.
@inproceedings{gram2018hamiltonian,
title = {Hamiltonian Monte Carlo for Probabilistic Programs with Discontinuities},
author = {Gram-Hansen, Bradley and Zhou, Yuan and Kohn, Tobias and Rainforth, Tom and Yang, Hongseok and Wood, Frank},
booktitle = {International Conference on Probabilistic Programming},
year = {2018}
}
T. Rainforth, Y. Zhou, X. Lu, Y. W. Teh, F. Wood, H. Yang, J. van de Meent, Inference Trees: Adaptive Inference with Exploration, arXiv preprint arXiv:1806.09550, 2018.
We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods. ITs adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner. This enables ITs to not only identify regions of high posterior mass, but also maintain uncertainty estimates to track regions where significant posterior mass may have been missed. ITs can be based on any inference method that provides a consistent estimate of the marginal likelihood. They are particularly effective when combined with sequential Monte Carlo, where they capture long-range dependencies and yield improvements beyond proposal adaptation alone.
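As a rough illustration of the exploration/exploitation mechanism, here is a toy Python sketch of the core idea under strong simplifying assumptions: the partition is fixed rather than learned online, plain importance sampling stands in for SMC, and a UCB-style bonus drives exploration. It is not the ITs algorithm itself.

import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Unnormalised bimodal target on [0, 1]: a dominant and a smaller mode.
    return np.exp(-200.0 * (x - 0.25) ** 2) + 0.5 * np.exp(-200.0 * (x - 0.8) ** 2)

partitions = [(0.0, 0.5), (0.5, 1.0)]      # fixed partition, for simplicity
mass = np.zeros(2)                          # running mass estimate per partition
counts = np.zeros(2)

for t in range(1, 2001):
    # Score each partition by estimated mass (exploitation) plus a bonus
    # (exploration) that keeps visiting partitions whose mass may have
    # been missed, then sample from the winner.
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
    k = int(np.argmax(mass + bonus))
    lo, hi = partitions[k]
    x = rng.uniform(lo, hi)
    w = target(x) * (hi - lo)               # importance weight, uniform proposal
    counts[k] += 1.0
    mass[k] += (w - mass[k]) / counts[k]    # running mean of the mass estimate

print("estimated mass per partition:", mass)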
@article{rainforth2018it,
title = {Inference Trees: Adaptive Inference with Exploration},
author = {Rainforth, Tom and Zhou, Yuan and Lu, Xiaoyu and Teh, Yee Whye and Wood, Frank and Yang, Hongseok and van de Meent, Jan-Willem},
journal = {arXiv preprint arXiv:1806.09550},
year = {2018}
}
X. Lu, T. Rainforth, Y. Zhou, J. van de Meent, Y. W. Teh, On Exploration, Exploitation and Learning in Adaptive Importance Sampling, arXiv preprint arXiv:1810.13296, 2018.
We study adaptive importance sampling (AIS) as an online learning problem and argue for the importance of the trade-off between exploration and exploitation in this adaptation. Borrowing ideas from the bandits literature, we propose Daisee, a partition-based AIS algorithm. We further introduce a notion of regret for AIS and show that Daisee has O((log T)^(3/4) √T) cumulative pseudo-regret, where T is the number of iterations. We then extend Daisee to adaptively learn a hierarchical partitioning of the sample space for more efficient sampling and confirm the performance of both algorithms empirically.
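The trade-off the paper studies can be sketched in a few lines of Python. The following toy example (my own simplification, not the published Daisee algorithm) mixes a learned distribution over a fixed set of partitions with a decaying uniform exploration term, and uses the resulting importance weights to estimate a normalising constant.

import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.exp(-0.5 * (x - 2.0) ** 2)    # unnormalised N(2, 1) integrand

K = 10
edges = np.linspace(-5.0, 5.0, K + 1)       # K equal-width partitions of [-5, 5]
contrib = np.ones(K)                         # running contribution estimate per partition
counts = np.zeros(K)

Z_hat = 0.0
for t in range(1, 5001):
    eps = t ** -0.25                         # decaying exploration rate
    probs = (1.0 - eps) * contrib / contrib.sum() + eps / K
    k = rng.choice(K, p=probs)
    x = rng.uniform(edges[k], edges[k + 1])
    # Importance weight: target over the effective proposal density, which
    # puts mass probs[k] spread uniformly over partition k.
    w = target(x) / (probs[k] / (edges[k + 1] - edges[k]))
    counts[k] += 1.0
    contrib[k] += (w - contrib[k]) / counts[k]
    Z_hat += (w - Z_hat) / t                 # running mean of the evidence estimate

print("Z estimate:", Z_hat, "(true value is sqrt(2*pi), about 2.5066)")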
@article{lu2018exploration,
title = {{On Exploration, Exploitation and Learning in Adaptive Importance Sampling}},
author = {Lu, Xiaoyu and Rainforth, Tom and Zhou, Yuan and van de Meent, Jan-Willem and Teh, Yee Whye},
journal = {arXiv preprint arXiv:1810.13296},
year = {2018}
}
Software
2019
Y. Zhou, B. Gram-Hansen, T. Kohn, T. Rainforth, H. Yang, F. Wood, A Low-Level Probabilistic Programming Language for Non-Differentiable Models, International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Code: https://github.com/bradleygramhansen/PyLFPPL
@software{zhou2018lfpplb,
title = {{A Low-Level Probabilistic Programming Language for Non-Differentiable Models}},
author = {Zhou, Yuan and Gram-Hansen, Bradley and Kohn, Tobias and Rainforth, Tom and Yang, Hongseok and Wood, Frank},
year = {2019},
url = {https://github.com/bradleygramhansen/PyLFPPL}
}