1 code implementation • 31 Aug 2020 • Shuyu Lin, Ronald Clark
In this paper, we show that the performance of a learnt generative model is closely related to the model's ability to accurately represent the inferred latent data distribution, i.e. its topology and structural properties.
2 code implementations • IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020 • Shuyu Lin, Ronald Clark, Robert Birke, Sandro Schönborn, Niki Trigoni, Stephen Roberts
In this work, we propose a VAE-LSTM hybrid model as an unsupervised approach for anomaly detection in time series.
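The core idea behind reconstruction-based anomaly detection, which the VAE-LSTM model builds on, can be sketched with a windowed reconstruction-error score. This is a minimal NumPy illustration, not the authors' model: `recon` stands in for the output a trained VAE-LSTM would produce, and `window_scores` is a hypothetical helper name.

```python
import numpy as np

def window_scores(series, recon, window=4):
    # Score each length-`window` segment by its mean squared
    # reconstruction error; unusually large scores flag anomalies.
    errs = (series - recon) ** 2
    n = len(series) - window + 1
    return np.array([errs[i:i + window].mean() for i in range(n)])

# A clean signal reconstructed well everywhere except one spike:
series = np.zeros(16)
series[8] = 5.0                 # injected anomaly
recon = np.zeros(16)            # model reconstructs the normal pattern
scores = window_scores(series, recon)
```

Windows covering the spike receive a much higher score than the rest, so thresholding `scores` localises the anomalous segment in time.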
no code implementations • 9 Sep 2019 • Shuyu Lin, Stephen Roberts, Niki Trigoni, Ronald Clark
A trade-off exists between reconstruction quality and the prior regularisation in the Evidence Lower Bound (ELBO) loss that Variational Autoencoder (VAE) models use for learning.
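The trade-off can be made concrete with the (negative) ELBO written as reconstruction error plus a weighted KL regulariser. A minimal NumPy sketch, assuming a diagonal-Gaussian posterior and a standard-normal prior so the KL term has its usual closed form; the `beta` weight (as in a beta-VAE) is used here only to expose the trade-off and is not claimed to be the paper's formulation:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def neg_elbo(x, x_recon, mu, logvar, beta=1.0):
    # Negative ELBO (up to constants): squared reconstruction error
    # plus beta times the prior-regularisation (KL) term.
    recon_err = np.sum((x - x_recon) ** 2)
    return recon_err + beta * gaussian_kl(mu, logvar)
```

Raising `beta` pushes the posterior towards the prior (hurting reconstruction); lowering it favours reconstruction at the cost of a less regular latent space.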
no code implementations • ICLR Workshop DeepGenStruct 2019 • Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts
In this paper, we present a new generative model for learning latent embeddings.
no code implementations • 16 Feb 2019 • Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts
Variational Auto-encoders (VAEs) have been very successful as methods for forming compressed latent representations of complex, often high-dimensional, data.
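The compression a VAE performs hinges on the reparameterisation trick: the encoder outputs a mean and log-variance, and a latent code is drawn as a differentiable function of them. A minimal NumPy sketch of just that sampling step (the function name is illustrative; in a full model `mu` and `logvar` come from a learnt encoder network):

```python
import numpy as np

def reparameterise(mu, logvar, eps=None, rng=None):
    # z = mu + sigma * eps with eps ~ N(0, I), so the sample is
    # differentiable with respect to mu and logvar.
    if eps is None:
        rng = np.random.default_rng() if rng is None else rng
        eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

With `eps` fixed to zeros the sample collapses to the mean, which is the usual deterministic encoding used at test time.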