A final-year DPhil student in the OxWaSP programme, supervised by Professor Dino Sejdinovic and Dr Christopher Yau. My research interests lie in kernel methods, Gaussian processes and, lately, Bayesian optimisation.

Publications

2019

H. Law, P. Zhao, L. Chan, J. Huang, D. Sejdinovic, Hyperparameter Learning via Distributional Transfer, in Advances in Neural Information Processing Systems (NeurIPS), 2019, to appear.

Bayesian optimisation is a popular technique for hyperparameter learning, but it typically requires an initial 'exploration' phase even when potentially similar prior tasks have been solved. We propose to transfer information across tasks using kernel embeddings of the distributions of the training datasets used in those tasks. The resulting method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.

@inproceedings{LawZhaHuaSej2018,
author = {Law, H.C.L. and Zhao, P. and Chan, L. and Huang, J. and Sejdinovic, D.},
title = {{Hyperparameter Learning via Distributional Transfer}},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
pages = {to appear},
year = {2019}
}
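As background for the abstract above (this is standard Bayesian optimisation, not the paper's transfer method), a minimal sketch of a GP-surrogate loop with an expected-improvement acquisition; the RBF lengthscale, candidate grid, and toy objective are all illustrative choices:

```python
import math
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between 1-d point sets a and b.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Exact GP posterior mean and std at candidates Xs given data (X, y).
    L = np.linalg.cholesky(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)      # rbf(x, x) = 1
    return Ks.T @ alpha, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    # Closed-form EI for minimisation.
    z = (best - mu) / sd
    Phi = np.array([0.5 * (1.0 + math.erf(t / math.sqrt(2))) for t in z])
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sd * phi

f = lambda x: (x - 2.0) ** 2                # toy objective to minimise
grid = np.linspace(0.0, 5.0, 101)
X = np.array([0.0, 2.5, 5.0])               # initial design
y = f(X)
for _ in range(5):                          # BO loop: fit, acquire, evaluate
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best_x = X[np.argmin(y)]
```

The paper's point is precisely that this loop starts from scratch; transferring information from prior tasks shortens the initial exploration.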

A. Raj, H. Law, D. Sejdinovic, M. Park, A Differentially Private Kernel Two-Sample Test, in European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2019, to appear.

Kernel two-sample testing is a useful statistical tool in determining whether data samples arise from different distributions without imposing any parametric assumptions on those distributions. However, raw data samples can expose sensitive information about individuals who participate in scientific studies, which makes the current tests vulnerable to privacy breaches. Hence, we design a new framework for kernel two-sample testing conforming to differential privacy constraints, in order to guarantee the privacy of subjects in the data. Unlike existing differentially private parametric tests that simply add noise to data, kernel-based testing imposes a challenge due to a complex dependence of test statistics on the raw data, as these statistics correspond to estimators of distances between representations of probability measures in Hilbert spaces. Our approach considers finite dimensional approximations to those representations. As a result, a simple chi-squared test is obtained, where a test statistic depends on a mean and covariance of empirical differences between the samples, which we perturb for a privacy guarantee. We investigate the utility of our framework in two realistic settings and conclude that our method requires only a relatively modest increase in sample size to achieve a similar level of power to the non-private tests in both settings.

@inproceedings{RajLawSejPar2019,
author = {Raj, A. and Law, H.C.L. and Sejdinovic, D. and Park, M.},
title = {{A Differentially Private Kernel Two-Sample Test}},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)},
pages = {to appear},
year = {2019}
}
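The private test above perturbs statistics derived from kernel distances between distributions. As a non-private point of reference, a minimal sketch of the standard unbiased squared-MMD estimator with an RBF kernel, plus a permutation test; the bandwidth `gamma` and permutation count are illustrative:

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=0.5):
    # Unbiased estimate of MMD^2 with k(a, b) = exp(-gamma * (a - b)^2).
    def k(A, B):
        return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    np.fill_diagonal(Kxx, 0.0)   # drop diagonal terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1))
            - 2.0 * Kxy.mean())

def permutation_pvalue(X, Y, n_perm=200, seed=0):
    # Approximate the null by re-shuffling the pooled sample.
    rng = np.random.default_rng(seed)
    obs, Z, m = mmd2_unbiased(X, Y), np.concatenate([X, Y]), len(X)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(Z)
        hits += mmd2_unbiased(Z[:m], Z[m:]) >= obs
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=200)
Y = rng.normal(1.0, 1.0, size=200)   # shifted mean: the test should reject
p = permutation_pvalue(X, Y)
```

The difficulty the paper addresses is visible here: the statistic touches every pairwise kernel evaluation of the raw data, so privacy cannot be obtained by simply adding noise to the samples.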

2018

H. Law, D. Sejdinovic, E. Cameron, T. Lucas, S. Flaxman, K. Battle, K. Fukumizu, Variational Learning on Aggregate Outputs with Gaussian Processes, in Advances in Neural Information Processing Systems (NeurIPS), 2018, to appear.

While a typical supervised learning framework assumes that the inputs and the outputs are measured at the same levels of granularity, many applications, including global mapping of disease, only have access to outputs at a much coarser level than that of the inputs. Aggregation of outputs makes generalization to new inputs much more difficult. We consider an approach to this problem based on variational learning with a model of output aggregation and Gaussian processes, where aggregation leads to intractability of the standard evidence lower bounds. We propose new bounds and tractable approximations, leading to improved prediction accuracy and scalability to large datasets, while explicitly taking uncertainty into account. We develop a framework which extends to several types of likelihoods, including the Poisson model for aggregated count data. We apply our framework to a challenging and important problem, the fine-scale spatial modelling of malaria incidence, with over 1 million observations.

@inproceedings{LawSejCamLucFlaBatFuk2018,
author = {Law, H.C.L. and Sejdinovic, D. and Cameron, E. and Lucas, T.C.D. and Flaxman, S. and Battle, K. and Fukumizu, K.},
title = {{Variational Learning on Aggregate Outputs with Gaussian Processes}},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
pages = {to appear},
year = {2018}
}
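A minimal sketch of the aggregated-count observation model the abstract mentions (variable names are hypothetical): latent intensities at fine-scale inputs are summed within each bag, and the observed bag total is Poisson with that summed rate:

```python
import math
import numpy as np

def aggregated_poisson_loglik(f, bag_index, y_bag):
    # f: latent log-intensity at each fine-scale input
    # bag_index: bag id for each input; y_bag: observed total count per bag
    bag_rate = np.zeros(len(y_bag))
    np.add.at(bag_rate, bag_index, np.exp(f))   # sum rates within each bag
    # log Poisson(y | bag_rate), summed over bags
    return sum(y * math.log(r) - r - math.lgamma(y + 1)
               for y, r in zip(y_bag, bag_rate))

# Two inputs in one bag, each with unit intensity (f = 0), observed count 2.
ll = aggregated_poisson_loglik(np.zeros(2), np.array([0, 0]), np.array([2]))
```

Because the likelihood depends on the latent function only through these within-bag sums, the standard GP evidence lower bound becomes intractable, which is what motivates the paper's new bounds and approximations.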

H. Law, D. Sutherland, D. Sejdinovic, S. Flaxman, Bayesian Approaches to Distribution Regression, in Artificial Intelligence and Statistics (AISTATS), 2018.

Distribution regression has recently attracted much interest as a generic solution to the problem of supervised learning where labels are available at the group level, rather than at the individual level. Current approaches, however, do not propagate the uncertainty in observations due to sampling variability in the groups. This effectively assumes that small and large groups are estimated equally well, and should have equal weight in the final regression. We account for this uncertainty with a Bayesian distribution regression formalism, improving the robustness and performance of the model when group sizes vary. We frame our models in a neural network style, allowing for simple MAP inference using backpropagation to learn the parameters, as well as MCMC-based inference which can fully propagate uncertainty. We demonstrate our approach on illustrative toy datasets, as well as on a challenging problem of predicting age from images.

@inproceedings{LawSutSejFla2018,
author = {Law, H.C.L. and Sutherland, D.J. and Sejdinovic, D. and Flaxman, S.},
title = {{Bayesian Approaches to Distribution Regression}},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
year = {2018}
}
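As a non-Bayesian baseline for the setting described above, a toy sketch of distribution regression: each bag is summarised by the mean of random Fourier features (an approximate kernel mean embedding of its empirical distribution), and ridge regression maps embeddings to bag labels. All constants and the toy task are illustrative:

```python
import numpy as np

def bag_features(bags, W, b):
    # Mean of random Fourier features over each bag: an approximate
    # kernel mean embedding of the bag's empirical distribution.
    D = W.shape[1]
    return np.array([(np.sqrt(2.0 / D) * np.cos(X @ W + b)).mean(axis=0)
                     for X in bags])

rng = np.random.default_rng(0)
D = 100
W = rng.normal(size=(1, D))                # frequencies for a unit-bandwidth RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Toy task: each bag is a sample from N(mu, 1); the bag label is mu.
mus = rng.uniform(-2.0, 2.0, size=50)
bags = [rng.normal(mu, 1.0, size=(30, 1)) for mu in mus]
Phi = bag_features(bags, W, b)

lam = 1e-3                                 # ridge penalty (illustrative)
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ mus)
train_mse = np.mean((Phi @ beta - mus) ** 2)

test_bag = rng.normal(1.0, 1.0, size=(30, 1))
pred = bag_features([test_bag], W, b) @ beta
```

Note the issue the paper targets: the 30-sample embeddings are treated as exact here, so a 3-sample bag would get the same weight as a 3000-sample one; the Bayesian formulation propagates that sampling variability instead.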

2017

H. Law, C. Yau, D. Sejdinovic, Testing and Learning on Distributions with Symmetric Noise Invariance, in Advances in Neural Information Processing Systems (NeurIPS), 2017, pp. 1343–1353.

Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions. However, it is rarely the case that all possible differences between samples are of interest: discovered differences can be due to different types of measurement noise, data-collection artefacts, or other irrelevant sources of variability. We propose distances between distributions which encode invariance to additive symmetric noise, aimed at testing whether the assumed true underlying processes differ. Moreover, we construct invariant features of distributions, leading to learning algorithms robust to the impairment of the input distributions with symmetric additive noise. Such features lend themselves to a straightforward neural network implementation and can thus also be learned given a supervised signal.

@inproceedings{LawYauSej2017,
author = {Law, H.C.L. and Yau, C. and Sejdinovic, D.},
title = {{Testing and Learning on Distributions with Symmetric Noise Invariance}},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2017},
pages = {1343--1353}
}
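The invariant features above are related to phases of characteristic functions. A toy sketch of that idea (frequencies, sample sizes, and noise law are illustrative): the empirical characteristic function normalised to unit modulus is unchanged by additive noise whose characteristic function is real and positive at the chosen frequencies, as for the symmetric uniform noise below:

```python
import numpy as np

def phase_features(x, freqs):
    # Empirical characteristic function at each frequency, normalised to
    # unit modulus ("phase features").
    cf = np.exp(1j * x[:, None] * freqs[None, :]).mean(axis=0)
    return cf / np.abs(cf)

rng = np.random.default_rng(0)
freqs = rng.normal(scale=0.3, size=8)      # low frequencies keep the CF away from 0
x = rng.normal(1.0, 1.0, size=5000)        # underlying signal
noise = rng.uniform(-2.0, 2.0, size=5000)  # symmetric noise: its CF is real
p_clean = phase_features(x, freqs)
p_noisy = phase_features(x + noise, freqs)
gap = np.max(np.abs(p_clean - p_noisy))    # small: the phase ignores the noise
```

In contrast, a location shift of the signal does change the phase, so these features discard the noise while retaining differences one typically cares about.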