OxCSML at ICML 2021
The group is participating in ICML 2021. Please feel free to stop by any of our poster sessions or presentations! We have 14 papers accepted to the main program of the conference:
- Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design by Adam Foster, Desi R. Ivanova, Ilyas Malik and Tom Rainforth
- Differentiable Particle Filtering via Entropy-Regularized Optimal Transport by Adrien Corenflos*, James Thornton*, George Deligiannidis, Arnaud Doucet
- Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding by Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani and Chris J. Maddison
- Provably Strict Generalisation Benefit for Equivariant Models by Bryn Elesedy and Sheheryar Zaidi
- Active Testing: Sample-Efficient Model Evaluation by Jannik Kossen, Sebastian Farquhar, Yarin Gal and Tom Rainforth
- Probabilistic Programs with Stochastic Conditioning by David Tolpin, Yuan Zhou, Tom Rainforth and Hongseok Yang
- Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning by Luisa Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann and Shimon Whiteson
- Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment by Philip J. Ball*, Cong Lu*, Jack Parker-Holder and Stephen Roberts
- Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces by Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, Michael A. Osborne
- Monte Carlo Variational Auto-Encoders by Achille Thin, Nikita Kotelevskii, Alain Durmus, Maxim Panov, Eric Moulines, Arnaud Doucet
- LieTransformer: Equivariant Self-Attention for Lie Groups by Michael Hutchinson*, Charline Le Lan*, Sheheryar Zaidi*, Emilien Dupont, Yee Whye Teh, Hyunjik Kim
- Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes by Peter Holderrieth, Michael Hutchinson, Yee Whye Teh
- Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections by Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris Holmes, Mert Gürbüzbalaban and Umut Şimşekli
- On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes by Tim Rudner, Oscar Key, Yarin Gal and Tom Rainforth
In addition, Yee Whye Teh received the Test of Time Award for his 2011 paper with Max Welling, Bayesian Learning via Stochastic Gradient Langevin Dynamics.
See below for a quick rundown of each paper, plus the presentation and poster session times for each.
Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
Deep Adaptive Design (DAD) enables fast, adaptive experimentation. By learning a design network, DAD removes the need for costly computations at each step of the experiment and makes decisions in under a second using a single forward pass.
- Long presentation in Bayesian Learning session 1: Thu 22 Jul 14:00 BST (6:00 a.m. PDT)
- Poster Session 5: Thu 22 Jul 16:00 - 19:00 BST (8 a.m. - 11 a.m. PDT)
Differentiable Particle Filtering via Entropy-Regularized Optimal Transport
Leveraging regularized Optimal Transport for resampling enables end-to-end Differentiable Particle Filtering.
- Long presentation in Probabilistic Methods session 2: Thu 22 Jul 13:00 - 13:20 BST (5:00 a.m. - 5:20 a.m. PDT)
- Poster Session 5: Thu 22 Jul 16:00 - 19:00 BST (8 a.m. - 11 a.m. PDT)
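To give a flavour of the idea, here is a minimal sketch (our own naming and hyperparameters, not the authors' implementation): replacing discrete multinomial resampling with a Sinkhorn-computed entropic transport plan makes the resampling step a smooth function of the particle weights and positions.

```python
import numpy as np

def sinkhorn_resample(particles, log_weights, eps=1.0, n_iters=200):
    """Differentiable-resampling sketch via entropy-regularized OT.

    Computes a Sinkhorn transport plan from the weighted particle
    distribution to a uniform one, then forms new particles as the
    barycentric projection (weighted averages) of the old ones.
    """
    n = len(particles)
    a = np.exp(log_weights - log_weights.max())
    a = a / a.sum()                   # source marginal: particle weights
    b = np.full(n, 1.0 / n)           # target marginal: uniform

    C = (particles[:, None] - particles[None, :]) ** 2  # squared-distance cost
    K = np.exp(-C / eps)              # Gibbs kernel

    u, v = np.ones(n), np.ones(n)
    for _ in range(n_iters):          # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)

    P = u[:, None] * K * v[None, :]   # entropic transport plan
    # Every step above is smooth in the weights and positions, so
    # gradients can flow through this "soft" resampling step.
    new_particles = (P.T @ particles) / b
    new_log_weights = np.full(n, -np.log(n))  # uniform weights afterwards
    return new_particles, new_log_weights
```

Because each new particle is a convex combination of the old ones, the whole filter becomes end-to-end differentiable, at the cost of the blur controlled by the regularization strength `eps`.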
Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding
You want to compress data with a latent variable model, but bits-back coding achieves a suboptimal code length (the negative ELBO). We show how to break this barrier with asymptotically optimal coders: Monte Carlo Bits-Back (McBits).
- Long presentation in Probabilistic Methods session 1: Wed 21 Jul 13:00 BST (5:00 a.m. PDT)
- Poster Session 3: Wed 21 Jul 16:00 - 19:00 BST (8:00 a.m. - 11:00 a.m. PDT)
Provably Strict Generalisation Benefit for Equivariant Models
The first provably strict (non-zero) generalisation benefit for equivariant models.
- Spotlight presentation in AutoML and Deep Architecture: Tue 20 Jul 14:00 - 15:00 BST (6:00 a.m. - 7:00 a.m. PDT)
- Poster Session 1: Tue 20 Jul 16:00 - 19:00 BST (8:00 a.m. - 11:00 a.m. PDT)
Active Testing: Sample-Efficient Model Evaluation
We show how to evaluate models accurately with far fewer labels by actively selecting the most informative test points, while correcting for the bias this selection introduces.
- Spotlight presentation in Probabilistic Methods session 3: Thu 22 Jul 15:00 - 16:00 BST (7:00 a.m. - 8:00 a.m. PDT)
- Poster Session 5: Thu 22 Jul 16:00 - 19:00 BST (8 a.m. - 11 a.m. PDT)
Probabilistic Programs with Stochastic Conditioning
We formalize and show how to condition programs on variables taking a particular distribution, rather than a fixed value.
- Spotlight presentation in Reinforcement Learning Theory session 3: Thu 22 Jul 01:00 - 02:00 BST (Wed 5:00 p.m. - 6:00 p.m. PDT)
- Poster Session 4: Thu 22 Jul 04:00 - 07:00 BST (Wed 8 p.m. - 11 p.m. PDT)
Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning
Effective meta-exploration for hard exploration tasks via reward bonuses in approximate hyper-state space.
- Spotlight presentation in Multi-task Learning 1: Fri 23 Jul 01:00 - 02:00 BST (Thu 5 p.m. - 6 p.m. PDT)
- Poster Session 6: Fri 23 Jul 05:00 - 08:00 BST (Thu 9 p.m. - midnight PDT)
Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment
Generalisation to novel environments from offline data on a single environment via a simple self-supervised context adaption algorithm.
- Spotlight presentation in Reinforcement Learning 5: Wed 21 Jul 02:00 - 03:00 BST (Tue 6 p.m. - 7 p.m. PDT)
- Poster Session 2: Wed 21 Jul 04:00 - 07:00 BST (Tue 8 p.m. - 11 p.m. PDT)
Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces
Combining local optimisation with a tailored kernel design for effective Bayesian optimisation in high-dimensional mixed continuous and categorical search spaces.
- Spotlight presentation in AutoML: Wed 21 Jul 02:00 - 03:00 BST (Tue 6 p.m. - 7 p.m. PDT)
- Poster Session 2: Wed 21 Jul 04:00 - 07:00 BST (Tue 8 p.m. - 11 p.m. PDT)
Monte Carlo Variational Auto-Encoders
We show how to obtain unbiased gradient estimates of tight ELBOs obtained using sophisticated evidence estimates such as Annealed Importance Sampling.
- Spotlight presentation in Algorithms 3: Fri 23 Jul 02:00 - 03:00 BST (Thu 6 p.m. - 7 p.m. PDT)
- Poster Session 6: Fri 23 Jul 05:00 - 08:00 BST (Thu 9 p.m. - midnight PDT)
LieTransformer: Equivariant Self-Attention for Lie Groups
We propose a self-attention-based architecture that is equivariant to arbitrary Lie groups and their discrete sub-groups.
- Spotlight presentation in Deep Learning Algorithms session 4: Tue 20 Jul 15:00 - 16:00 BST (7 a.m. - 8 a.m. PDT)
- Poster Session 1: Tue 20 Jul 16:00 - 19:00 BST (8 a.m. - 11 a.m. PDT)
Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes
We study vector-valued stochastic processes with Euclidean symmetries, and apply the results to Gaussian and Neural processes.
- Spotlight presentation in Gaussian Processes 2: Fri 23 Jul 04:30 - 05:00 BST (Thu 8:30 p.m. - 9 p.m. PDT)
- Poster Session 6: Fri 23 Jul 05:00 - 08:00 BST (Thu 9 p.m. - midnight PDT)
Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections
Injecting Gaussian noise induces an implicit bias in SGD, because the resulting noise on the gradients is asymmetric and heavy-tailed.
- Spotlight presentation in Deep Learning Theory session 4: Thu 22 Jul 01:00 - 02:00 BST (Wed 5 p.m. - 6 p.m. PDT)
- Poster Session 4: Thu 22 Jul 04:00 - 07:00 BST (Wed 8 p.m. - 11 p.m. PDT)
On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
We show that importance-weighted VI for deep GPs can lead to arbitrarily poor gradient estimates and how to prevent this from happening.
- Spotlight presentation in Gaussian Processes 1: Thu 22 Jul 13:00 - 14:00 BST (5 a.m. - 6 a.m. PDT)
- Poster Session 5: Thu 22 Jul 17:00 - 19:00 BST (9 a.m. - 11 a.m. PDT)
Bayesian Learning via Stochastic Gradient Langevin Dynamics
- Test of Time Award talk: Fri 23 Jul 04:00 BST (Thu 8 p.m. PDT)
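For readers who don't know the winning paper: SGLD adds Gaussian noise, scaled to match the step size, to stochastic gradient updates so that the iterates sample from the posterior rather than converging to a mode. Below is a minimal constant-step sketch for inferring a Gaussian mean (illustrative only; the function name, data, and hyperparameters are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_i ~ N(theta_true, 1) with unknown mean theta
theta_true = 2.0
x = rng.normal(theta_true, 1.0, size=1000)

def sgld_sample(x, n_steps=20000, batch_size=100, step=1e-4, prior_var=10.0):
    """SGLD sketch for the mean of a unit-variance Gaussian, N(0, prior_var) prior."""
    N = len(x)
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        batch = rng.choice(x, size=batch_size, replace=False)
        # Unbiased stochastic gradient of the log posterior:
        # grad log prior + (N / batch_size) * sum of minibatch log-lik grads
        grad = -theta / prior_var + (N / batch_size) * np.sum(batch - theta)
        # Langevin update: half-step along the gradient, plus injected
        # Gaussian noise whose variance matches the step size
        theta = theta + 0.5 * step * grad + rng.normal(0.0, np.sqrt(step))
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in
```

With the step size annealed to zero, as in the paper, the injected noise dominates the minibatch gradient noise and the iterates converge to posterior samples; this constant-step version only illustrates the update rule.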