The group is participating in ICML 2021. Please feel free to stop by any of our poster sessions or presentations! We have 14 papers accepted to the main program of the conference.

In addition, Yee Whye Teh received the Test of Time Award for his 2011 paper with Max Welling, Bayesian Learning via Stochastic Gradient Langevin Dynamics.

See below for a quick rundown of each paper, along with the presentation and poster session times for each.

Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design

Deep Adaptive Design (DAD) enables fast, adaptive experimentation. By learning a design network, DAD removes the need for costly computations at each step of the experiment and makes design decisions in under a second using a single forward pass.

  • Long presentation in Bayesian Learning session 1: Thu 22 July 14:00 BST (6:00 a.m. PDT)
  • Poster Session 5: Thu 22 July 16:00 - 19:00 BST (8 a.m. PDT - 11 a.m. PDT)

Paper and code available!
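As a rough illustration of the amortisation idea (not the actual DAD architecture or training procedure), a design network maps the history of (design, outcome) pairs to the next design in a single forward pass, so no posterior inference or optimisation is needed while the experiment is running. All layer sizes and names below are hypothetical.

```python
import torch
import torch.nn as nn

class DesignNetwork(nn.Module):
    """Hypothetical amortised design policy: history of (design, outcome)
    pairs -> next design, computed in one forward pass."""

    def __init__(self, design_dim: int, outcome_dim: int, hidden: int = 64):
        super().__init__()
        # Encode each (design, outcome) pair, then sum-pool over the history
        # so the policy handles a variable number of past experiments.
        self.encoder = nn.Sequential(
            nn.Linear(design_dim + outcome_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, design_dim),
        )

    def forward(self, designs: torch.Tensor, outcomes: torch.Tensor) -> torch.Tensor:
        # designs: (T, design_dim), outcomes: (T, outcome_dim); T may be 0.
        if designs.shape[0] == 0:
            pooled = torch.zeros(self.head[0].in_features)
        else:
            pooled = self.encoder(torch.cat([designs, outcomes], dim=-1)).sum(dim=0)
        return self.head(pooled)

# At experiment time, choosing the next design is a single forward pass.
policy = DesignNetwork(design_dim=2, outcome_dim=1)
first_design = policy(torch.zeros(0, 2), torch.zeros(0, 1))
```

The expensive part (training the network to maximise an expected-information-gain objective) happens once, offline; deployment then only requires forward passes like the one above.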

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

Leveraging regularized Optimal Transport for resampling enables end-to-end Differentiable Particle Filtering.

  • Long presentation in Probabilistic Methods session 2: Thu 22 July 13:00 - 13:20 BST (5:00 a.m. - 5:20 a.m. PDT)
  • Poster Session 5: Thu 22 July 16:00 - 19:00 BST (8 a.m. PDT - 11 a.m. PDT)

Paper and code available!
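Very roughly, the idea is to replace the discrete (non-differentiable) resampling step of a particle filter with an entropy-regularised optimal transport map from the weighted particle set to a uniformly weighted one, so gradients can flow through resampling. Below is a minimal numpy sketch of this kind of soft resampling via Sinkhorn iterations; it is an illustration of the mechanism, not the paper's exact algorithm.

```python
import numpy as np

def sinkhorn_plan(a, b, cost, eps=0.1, n_iters=200):
    """Entropy-regularised OT plan between histograms a and b (Sinkhorn iterations)."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan; rows sum to a, columns to b

def ot_resample(particles, weights, eps=0.1):
    """Differentiable 'ensemble transform' resampling: each new particle is a
    barycentric (convex) combination of the old particles."""
    n = len(weights)
    cost = np.sum((particles[:, None, :] - particles[None, :, :]) ** 2, axis=-1)
    plan = sinkhorn_plan(weights, np.full(n, 1.0 / n), cost, eps)
    new_particles = (n * plan).T @ particles   # column j of n*plan is a prob. vector
    new_weights = np.full(n, 1.0 / n)
    return new_particles, new_weights

# Example: 5 two-dimensional particles with uneven weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
w = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
x_new, w_new = ot_resample(x, w)
```

Because every operation above is smooth in the particle locations and weights, the whole filter can be trained end to end by backpropagation.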

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

You want to compress data with a latent variable model, but bits-back coding achieves a suboptimal code length (the negative ELBO). We show how to break this barrier with asymptotically optimal coders: Monte Carlo Bits-Back (McBits).

  • Long presentation in Probabilistic Methods session 1: Wed 21 Jul 13:00 BST (5:00 a.m. PDT)
  • Poster Session 3: Wed 21 Jul 16:00 — 19:00 BST (8:00 a.m. - 11:00 a.m. PDT)

Paper and code available!
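To make the "barrier" concrete (a rough summary, not the paper's precise statements): with model $p_\theta(x, z)$ and approximate posterior $q_\phi(z \mid x)$, standard bits-back coding has an expected code length equal to the negative ELBO, whereas coupling bits-back with a Monte Carlo evidence estimator lets the expected length approach the ideal $-\log p_\theta(x)$ as the number of samples or particles grows:

```latex
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\bigl[\log q_\phi(z \mid x) - \log p_\theta(x, z)\bigr]}_{\text{expected bits-back length} \;=\; -\mathrm{ELBO}(x)}
\;\ge\; -\log p_\theta(x),
\qquad
-\,\mathbb{E}\bigl[\log \hat{p}_K(x)\bigr] \;\xrightarrow{\;K \to \infty\;}\; -\log p_\theta(x),
```

where $\hat{p}_K(x)$ denotes a $K$-sample unbiased evidence estimator such as importance sampling, SMC, or annealed importance sampling.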

Provably Strict Generalisation Benefit for Equivariant Models

The first proof of a strictly positive generalisation benefit for equivariant models.

  • Spotlight presentation in AutoML and Deep Architecture: Tue 20 Jul 14:00 — 15:00 BST (6:00 a.m. - 7:00 a.m. PDT)
  • Poster Session 1: Tue 20 Jul 16:00 BST — 19:00 BST (8:00 a.m. - 11:00 a.m. PDT)

Paper
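To give a flavour of the kind of result involved (our paraphrase, under simplifying assumptions: squared loss, and a target and input distribution that are both invariant under the group $G$): averaging any predictor $f$ over the group cannot increase the risk, and strictly decreases it whenever $f$ is not already invariant,

```latex
\bar f(x) \;=\; \mathbb{E}_{g \sim G}\bigl[f(g \cdot x)\bigr],
\qquad
R(f) \;=\; R(\bar f) \;+\; \mathbb{E}_{x}\Bigl[\bigl(f(x) - \bar f(x)\bigr)^{2}\Bigr],
```

so the benefit $R(f) - R(\bar f)$ is strictly non-zero unless $f$ already respects the symmetry. The paper makes this precise and treats the more general equivariant setting.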

Active Testing: Sample-Efficient Model Evaluation

Actively selecting which test points to label enables sample-efficient evaluation of machine learning models.

  • Spotlight presentation in Probabilistic Methods session 3: Thu 22 Jul 15:00 - 16:00 BST (7:00 a.m. - 8:00 a.m. PDT)
  • Poster Session 5: Thu 22 July 16:00 - 19:00 BST (8 a.m. PDT - 11 a.m. PDT)

Paper and code available!
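As a rough sketch of the general recipe (not the paper's specific estimator, which has much lower variance than this one): draw test points from the pool according to an acquisition distribution that favours informative points, and importance-weight the observed losses so the estimate of the test risk stays unbiased. The function and variable names below are hypothetical.

```python
import numpy as np

def active_test_risk(losses, acquisition_scores, n_labels, rng=None):
    """Estimate the pool-average test loss while labelling only `n_labels` points.

    losses: per-point losses (in practice only computed for queried points);
    acquisition_scores: unnormalised preferences for which points to label,
    e.g. predictive uncertainty of the model under evaluation."""
    rng = np.random.default_rng(rng)
    n = len(losses)
    q = acquisition_scores / acquisition_scores.sum()   # proposal over the pool
    idx = rng.choice(n, size=n_labels, replace=True, p=q)
    # Importance weights 1 / (n * q_i) keep the estimator unbiased for the
    # pool-average loss, whatever acquisition distribution was used.
    return np.mean(losses[idx] / (n * q[idx]))

# Toy example: a 1000-point pool, labelling only 50 points.
rng = np.random.default_rng(0)
true_losses = rng.gamma(2.0, 0.5, size=1000)
scores = true_losses + rng.normal(0, 0.1, 1000).clip(min=0) + 1e-3
estimate = active_test_risk(true_losses, scores, n_labels=50, rng=1)
```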

Probabilistic Programs with Stochastic Conditioning

We formalize and show how to condition programs on variables taking a particular distribution, rather than a fixed value.

  • Spotlight presentation in Reinforcement Learning Theory session 3: Thu 22 Jul 01:00 BST — 02:00 BST (Weds 5:00 p.m. - 6:00 p.m. PDT)
  • Poster Session 4: Thu 22 Jul 04:00 BST — 07:00 BST (Weds 8 p.m - 11 p.m. PDT)

Paper
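One way to make this concrete (a rough paraphrase; see the paper for the actual formalisation): ordinary conditioning fixes an observed value, whereas stochastic conditioning constrains an observed variable to follow a given distribution $D$,

```latex
p(\theta \mid y = y_0) \;\propto\; p(\theta)\, p(y_0 \mid \theta),
\qquad
p(\theta \mid y \sim D) \;\propto\; p(\theta)\, \exp\!\Bigl(\mathbb{E}_{y \sim D}\bigl[\log p(y \mid \theta)\bigr]\Bigr),
```

so a probabilistic program can be conditioned on, for example, a marginal distribution of outcomes rather than on individual observations.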

Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

Effective meta-exploration for hard exploration tasks via reward bonuses in approximate hyper-state space.

  • Spotlight presentation in Multi-task Learning 1: Fri 23 Jul 01:00 — 02:00 BST (Thu 5 p.m. PDT)
  • Poster Session 6: Fri 23 Jul 05:00 — 08:00 BST (Thu 9 p.m. - midnight PDT)

Paper
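In generic terms (a simplified paraphrase, with $b_t$ for the task belief, $\beta$, and the bonus function all used as placeholder symbols): the "hyper-state" augments the environment state $s_t$ with the agent's current task belief $b_t$, and meta-exploration is driven by adding novelty bonuses on these hyper-states to the environment reward,

```latex
r^{+}(s_t, a_t, b_t) \;=\; r(s_t, a_t) \;+\; \beta \cdot \mathrm{bonus}(s_t, b_t),
```

where the bonus is large for rarely visited (approximate) hyper-states, encouraging the agent to gather information about the task rather than only about the state space.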

Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment

Generalisation to novel environments from offline data on a single environment via a simple self-supervised context adaption algorithm.

  • Spotlight presentation in Reinforcement Learning 5, Wed 21 Jul 02:00 BST — 03:00 BST (Tues 6 p.m. PDT)
  • Poster Session 2: Wed 21 Jul 04:00 BST — 07:00 BST (Tues 8 p.m - 11 p.m. PDT)

Paper

Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces

Combining local optimisation with a tailored kernel design for effective Bayesian optimisation in high-dimensional mixed continuous and categorical search spaces.

  • Spotlight presentation in AutoML: Wed 21 Jul 02:00 BST — 03:00 BST (Tues 6 p.m. PDT)
  • Poster Session 2: Wed 21 Jul 04:00 BST — 07:00 BST (Tues 8 p.m - 11 p.m. PDT)

Paper
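To illustrate what a kernel tailored to mixed spaces can look like (a generic construction in the spirit of the paper, not necessarily its exact kernel): combine a standard continuous kernel over the numerical dimensions with an overlap-style kernel over the categorical dimensions, mixed additively and multiplicatively. All names and hyperparameters below are made up.

```python
import numpy as np

def matern52(x, y, lengthscale=1.0):
    """Matern-5/2 kernel on the continuous dimensions."""
    d = np.sqrt(np.sum((x - y) ** 2)) / lengthscale
    return (1 + np.sqrt(5) * d + 5 * d**2 / 3) * np.exp(-np.sqrt(5) * d)

def overlap(c, c_prime, lengthscale=1.0):
    """Exponentiated-overlap kernel on the categorical dimensions:
    similarity grows with the fraction of matching categories."""
    frac_match = np.mean(np.asarray(c) == np.asarray(c_prime))
    return np.exp(frac_match / lengthscale) / np.exp(1.0 / lengthscale)

def mixed_kernel(x, c, x_prime, c_prime, lam=0.5):
    """Weighted mix of the sum and product of the two kernels, a common
    choice for Bayesian optimisation over mixed search spaces."""
    k_cont = matern52(x, x_prime)
    k_cat = overlap(c, c_prime)
    return (1 - lam) * (k_cont + k_cat) / 2 + lam * k_cont * k_cat

# Example: two points with 2 continuous and 3 categorical dimensions.
k = mixed_kernel(np.array([0.1, 0.3]), ["red", "small", "A"],
                 np.array([0.2, 0.1]), ["red", "large", "A"])
```

The local-optimisation component of the method then restricts the search to trust regions around promising points, keeping the acquisition step tractable in high dimensions.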

Monte Carlo Variational Auto-Encoders

We show how to obtain unbiased gradient estimates of tight ELBOs obtained using sophisticated evidence estimates such as Annealed Importance Sampling.

  • Spotlight presentation in Algorithms 3: Fri 23 Jul 02:00 — 03:00 BST (Thu 6 p.m. PDT)
  • Poster Session 6: Fri 23 Jul 05:00 BST — 08:00 BST (Thu 9 p.m. - midnight PDT)

Paper
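The underlying principle, stated loosely: any positive unbiased estimator $\hat Z(x)$ of the evidence $p_\theta(x)$, for example one built from annealed importance sampling, yields a lower bound on the log-evidence by Jensen's inequality,

```latex
\mathcal{L}(x) \;=\; \mathbb{E}\bigl[\log \hat Z(x)\bigr] \;\le\; \log \mathbb{E}\bigl[\hat Z(x)\bigr] \;=\; \log p_\theta(x),
```

and the bound tightens as the estimator's variance shrinks. The difficulty, which the paper addresses, is obtaining unbiased gradient estimates of such bounds when the estimator involves MCMC-style transitions.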

LieTransformer: Equivariant Self-Attention for Lie Groups

We propose a self-attention-based architecture that is equivariant to arbitrary Lie groups and their discrete sub-groups.

  • Spotlight presentation in Deep Learning Algorithms session 4: Tue 20 Jul 15:00 — 16:00 BST (7 a.m. - 8 a.m. PDT)
  • Poster Session 1: Tue 20 Jul 16:00 — 19:00 BST (8 a.m. - 11 a.m. PDT)

Paper and code available!
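For reference, equivariance of a map $f$ with respect to a group $G$ acting on inputs and outputs means

```latex
f(g \cdot x) \;=\; g \cdot f(x) \qquad \text{for all } g \in G,
```

so transforming the input (for example rotating or translating a point cloud) transforms the output in the corresponding way; here the self-attention layers are constructed so that this holds for arbitrary Lie groups and their discrete subgroups.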

Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes

We study vector-valued stochastic processes with Euclidean symmetries, and apply the results to Gaussian and Neural processes.

  • Spotlight presentation in Gaussian Processes 2: Fri 23 Jul 04:30 — 05:00 BST (Thu 8:30 p.m. - 9 p.m. PDT)
  • Poster Session 6: Fri 23 Jul 05:00 — 08:00 BST (Thu 9 p.m. - midnight PDT)

Paper and code available!
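Concretely, for a zero-mean vector-valued Gaussian process, equivariance under a group of Euclidean transformations (acting on the outputs through a representation $\rho$) amounts, roughly, to a constraint on the matrix-valued covariance kernel,

```latex
K(g \cdot x,\; g \cdot x') \;=\; \rho(g)\, K(x, x')\, \rho(g)^{\top} \qquad \text{for all } g,
```

which is the kind of condition the paper characterises and then builds into steerable conditional neural processes.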

Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections

Injecting Gaussian noise induces an implicit bias in SGD, because the resulting noise on the gradients is heavy-tailed and asymmetric.

  • Spotlight presentation in Deep Learning Theory session 4: Thu 22 Jul 01:00 — 02:00 BST (Weds 5 p.m. - 6 p.m. PDT)
  • Poster Session 4: Thu 22 Jul 04:00 — 07:00 BST (Weds 8 p.m. - 11 p.m. PDT)

Paper

On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes

We show that importance-weighted VI for deep GPs can lead to arbitrarily poor gradient estimates and how to prevent this from happening.

  • Spotlight presentation in Gaussian Processes 1: Thu 22 Jul 13:00 — 14:00 BST (5 a.m. - 6 a.m. PDT)
  • Poster Session 5: Thu 22 July 17:00 - 19:00 BST (9 a.m. - 11 a.m. PDT)

Paper and code available!
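The signal-to-noise ratio in question is that of a stochastic gradient estimator: for a parameter $\theta$ and an estimator $\hat g$ of $\nabla_\theta \mathcal{L}$,

```latex
\mathrm{SNR}(\hat g) \;=\; \frac{\bigl|\mathbb{E}[\hat g]\bigr|}{\sqrt{\operatorname{Var}[\hat g]}},
```

and when this ratio collapses towards zero the gradient signal is swamped by estimation noise. The paper shows this can happen for importance-weighted variational inference in deep Gaussian processes and proposes a remedy.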

Bayesian Learning via Stochastic Gradient Langevin Dynamics
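The award-winning paper introduced Stochastic Gradient Langevin Dynamics (SGLD), which turns minibatch stochastic gradient ascent on the log-posterior into an approximate posterior sampler by injecting Gaussian noise scaled to the stepsize. In the usual notation, with $N$ data points, a minibatch $\mathcal{B}_t$ of size $n$, and stepsizes $\epsilon_t$,

```latex
\Delta\theta_t \;=\; \frac{\epsilon_t}{2}\Bigl(\nabla \log p(\theta_t) \;+\; \frac{N}{n}\sum_{i \in \mathcal{B}_t}\nabla \log p(x_i \mid \theta_t)\Bigr) \;+\; \eta_t,
\qquad \eta_t \sim \mathcal{N}(0, \epsilon_t I),
```

with the stepsizes decayed over time so that the iterates transition from optimisation towards sampling from the posterior rather than converging to a point estimate.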