no code implementations • 5 Sep 2022 • Patrick Cannon, Daniel Ward, Sebastian M. Schmon
In this work, we provide the first comprehensive study of the behaviour of neural simulation-based inference (SBI) algorithms in the presence of various forms of model misspecification.
no code implementations • 8 Jul 2022 • Jordan Langham-Lopez, Sebastian M. Schmon, Patrick Cannon
Multi-agent reinforcement learning experiments and open-source training environments are typically limited in scale, supporting tens or sometimes up to hundreds of interacting agents.
Multi-agent Reinforcement Learning • Reinforcement Learning • +2
1 code implementation • CVPR 2022 • Julian Wyatt, Adam Leach, Sebastian M. Schmon, Chris G. Willcocks
A secondary problem is that Gaussian diffusion fails to capture larger anomalies; therefore we develop a multi-scale simplex noise diffusion process that gives control over the target anomaly size.
Ranked #21 on Anomaly Detection on VisA
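Below is a rough, self-contained sketch of the multi-scale noise idea: coarse random fields at several octaves are combined so that the dominant noise wavelength, and hence the size of the regions it corrupts, can be tuned. It uses simple value noise in NumPy as a stand-in for the simplex noise used in the paper; the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multi_scale_noise(size=128, octaves=4, base_res=4, persistence=0.5, seed=None):
    """Sum of coarse-to-fine random fields; lower octaves give larger blobs."""
    rng = np.random.default_rng(seed)
    noise = np.zeros((size, size))
    amplitude, total = 1.0, 0.0
    for o in range(octaves):
        res = base_res * (2 ** o)                  # grid resolution of this octave
        grid = rng.standard_normal((res, res))     # coarse random field
        idx = np.arange(size) * res // size        # nearest-neighbour upsampling
        noise += amplitude * grid[np.ix_(idx, idx)]
        total += amplitude
        amplitude *= persistence                   # finer scales contribute less
    return noise / total

field = multi_scale_noise(128, octaves=3, seed=0)
print(field.shape, round(float(field.std()), 3))
```

Fewer octaves or a smaller `base_res` concentrate energy at coarse scales; in an anomaly-detection diffusion setting this corresponds to corrupting, and then repairing, regions of different sizes.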
no code implementations • 15 Jun 2022 • Joel Dyer, Patrick Cannon, J. Doyne Farmer, Sebastian M. Schmon
Calibrating agent-based models (ABMs) to data is among the most fundamental requirements to ensure the model fulfils its desired purpose.
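As a toy illustration of the calibration problem (not the inference method studied in the paper), the sketch below calibrates a one-parameter agent-based model with plain rejection ABC; the model, summary statistics, prior, and tolerance are all made up for the example.

```python
import numpy as np

def toy_abm(theta, n_agents=100, n_steps=50, rng=None):
    """Toy ABM: agents drift toward the group mean with strength theta."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n_agents)
    for _ in range(n_steps):
        x += theta * (x.mean() - x) + rng.standard_normal(n_agents)
    return x

def summary(x):
    return np.array([x.mean(), x.std()])

def abc_calibrate(observed, n_sims=2000, tol=0.5, seed=1):
    rng = np.random.default_rng(seed)
    obs_s, accepted = summary(observed), []
    for _ in range(n_sims):
        theta = rng.uniform(0.0, 1.0)            # prior over the ABM parameter
        sim_s = summary(toy_abm(theta, rng=rng))
        if np.linalg.norm(sim_s - obs_s) < tol:  # keep parameters whose output matches
            accepted.append(theta)
    return np.array(accepted)

observed = toy_abm(0.3, rng=0)
posterior = abc_calibrate(observed)
print(len(posterior), posterior.mean() if len(posterior) else None)
```

The accepted parameter values form an approximate posterior; more informative summaries and tighter tolerances sharpen it at the cost of more simulations, which is exactly why ABM calibration is expensive in practice.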
1 code implementation • ICLR 2022 • Tom Joy, Yuge Shi, Philip H. S. Torr, Tom Rainforth, Sebastian M. Schmon, N. Siddharth
Here we introduce a novel alternative, MEME, which avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision.
2 code implementations • ICLR 2021 • Tom Joy, Sebastian M. Schmon, Philip H. S. Torr, N. Siddharth, Tom Rainforth
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels.
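A minimal, hypothetical sketch of the general idea of tying labels to part of a VAE's latent space is given below: a designated block of latent dimensions is encouraged to carry label information via an auxiliary classifier term added to the ELBO. The architecture, names, and loss weighting are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAwareVAE(nn.Module):
    def __init__(self, x_dim=784, z_label=8, z_style=24, n_labels=10):
        super().__init__()
        z_dim = z_label + z_style
        self.z_label = z_label
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        self.clf = nn.Linear(z_label, n_labels)  # classifier sees only the label block

    def loss(self, x, y=None):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        recon = F.binary_cross_entropy_with_logits(self.dec(z), x,
                                                   reduction="none").sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        total = recon + kl
        if y is not None:  # supervised: push z[:, :z_label] to predict the label
            total = total + F.cross_entropy(
                self.clf(z[:, :self.z_label]), y, reduction="none")
        return total.mean()

model = LabelAwareVAE()
x = torch.rand(16, 784)               # dummy batch of inputs in [0, 1]
y = torch.randint(0, 10, (16,))
print(model.loss(x, y).item())
```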
1 code implementation • 8 Jun 2020 • Stefan Groha, Sebastian M. Schmon, Alexander Gusev
We show that our model exhibits state-of-the-art performance on popular survival data sets and demonstrate its efficacy in a multi-state setting.
no code implementations • 2 Dec 2019 • Jack K. Fitzsimons, Sebastian M. Schmon, Stephen J. Roberts
Bayesian interpretations of neural networks have a long history, dating back to early work in the 1990s, and have recently regained attention because of desirable properties such as uncertainty estimation, model robustness, and regularisation.
no code implementations • 3 Mar 2019 • Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis
When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.
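A minimal sketch of the random-weight idea in a bootstrap particle filter is given below: the observation density involves an integral over a nuisance variable, so each importance weight is replaced by a non-negative unbiased Monte Carlo estimate before multinomial resampling. The toy state-space model and all parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 50, 500, 20    # time steps, particles, inner Monte Carlo samples

# Toy model: x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + u_t + N(0,1) with
# nuisance u_t ~ N(0, 0.3^2), so p(y|x) is an integral over u. Here it is
# treated as intractable and estimated by averaging over simulated u's.
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + rng.standard_normal()
    ys.append(x_true + 0.3 * rng.standard_normal() + rng.standard_normal())

def estimated_weight(x, y):
    """Non-negative unbiased Monte Carlo estimate of p(y | x)."""
    u = 0.3 * rng.standard_normal((M, x.size))           # nuisance draws
    return np.exp(-0.5 * (y - x - u) ** 2).mean(axis=0) / np.sqrt(2 * np.pi)

particles = rng.standard_normal(N)
means = []
for y in ys:
    particles = 0.9 * particles + rng.standard_normal(N)  # propagate
    w = estimated_weight(particles, y)                     # noisy weights
    w = w / w.sum()
    means.append((w * particles).sum())
    idx = rng.choice(N, size=N, p=w)                       # multinomial resampling
    particles = particles[idx]

print(np.round(means[-5:], 2))
```

Because the weight estimates are unbiased, the filter still targets the correct distribution on an extended space, at the cost of extra variance from the inner Monte Carlo step.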