We outline an inherent weakness of tensor factorization models when latent factors are expressed as a function of side information, and propose a novel method to mitigate this weakness. We coin our method Kernel Fried Tensor (KFT) and present it as a large-scale forecasting tool for high-dimensional data. Our results show superior performance against LightGBM and Field-Aware Factorization Machines (FFM), two algorithms with proven track records that are widely used in industrial forecasting. We also develop a variational inference framework for KFT and associate our forecasts with calibrated uncertainty estimates on three large-scale datasets. Furthermore, KFT is shown empirically to be robust against uninformative side information in the form of constants and Gaussian noise.
@article{HuNicSej2021,
title = {{{Large Scale Tensor Regression using Kernels and Variational Inference}}},
author = {Hu, Robert and Nicholls, Geoff K. and Sejdinovic, Dino},
journal = {Machine Learning},
year = {2021}
}
R. Hu, D. Sejdinovic, Robust Deep Interpretable Features for Binary Image Classification, in Proceedings of the Northern Lights Deep Learning Workshop, 2021, vol. 2.
The problem of interpretability for binary image classification is considered through the lens of kernel two-sample tests and generative modeling. A feature extraction framework coined Deep Interpretable Features is developed and used in combination with IntroVAE, a generative model capable of high-resolution image synthesis. Experimental results on a variety of datasets, including COVID-19 chest X-rays, demonstrate the benefits of combining deep generative models with ideas from kernel-based hypothesis testing in moving towards more robust, interpretable deep generative models.
@inproceedings{HuSej2021,
author = {Hu, Robert and Sejdinovic, Dino},
title = {{{Robust Deep Interpretable Features for Binary Image Classification}}},
booktitle = {Proceedings of the Northern Lights Deep Learning Workshop},
year = {2021},
volume = {2},
doi = {10.7557/18.5708}
}