Search Results for author: Judith Rousseau

Found 10 papers, 2 papers with code

Scalable and adaptive variational Bayes methods for Hawkes processes

no code implementations • 1 Dec 2022 • Deborah Sulem, Vincent Rivoirard, Judith Rousseau

Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions.

Tasks: Data Augmentation, Point Processes
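
In the common exponential-kernel case, a linear Hawkes process has conditional intensity lambda(t) = mu + sum over past events t_k of alpha * exp(-beta * (t - t_k)), so each event temporarily raises the probability of further events. The sketch below simulates such a univariate process with Ogata's thinning algorithm; the parameter values are purely illustrative and are not taken from the paper.

import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    # Univariate Hawkes process with exponential kernel, simulated by Ogata's
    # thinning: lambda(t) = mu + sum_{t_k < t} alpha * exp(-beta * (t - t_k)).
    rng = np.random.default_rng(seed)
    events = []

    def lam(t):
        return mu + alpha * np.sum(np.exp(-beta * (t - np.array(events))))

    t = 0.0
    while True:
        lam_bar = lam(t)                       # intensity only decays until the next event,
        t += rng.exponential(1.0 / lam_bar)    # so lam(t) is a valid upper bound ahead
        if t > T:
            break
        if rng.uniform() <= lam(t) / lam_bar:  # accept with probability lambda(t) / lam_bar
            events.append(t)
    return np.array(events)

# Illustrative parameters; alpha / beta < 1 keeps the process subcritical.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
print(len(events), events[:5])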

Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement

1 code implementation • 18 Mar 2022 • Cian Naik, Judith Rousseau, Trevor Campbell

Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points.
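
The idea, roughly, is that the full-data log-likelihood sum_n log p(x_n | theta) is replaced by a weighted sum over a few points, sum_m w_m log p(x_m | theta), so that posterior computations only touch the coreset. The sketch below illustrates this with a crude uniform subsample on synthetic Gaussian data; the paper's actual contribution, a better choice of subsample and a quasi-Newton refinement of the weights, is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100_000)   # synthetic data, unit variance assumed known

def full_loglik(theta):
    # Full-data Gaussian log-likelihood, up to additive constants.
    return -0.5 * np.sum((x - theta) ** 2)

# Crude coreset: uniform subsample of size M with equal weights N / M.
M = 500
idx = rng.choice(x.size, size=M, replace=False)
w = np.full(M, x.size / M)

def coreset_loglik(theta):
    return -0.5 * np.sum(w * (x[idx] - theta) ** 2)

for theta in (1.5, 2.0, 2.5):
    print(theta, round(full_loglik(theta)), round(coreset_loglik(theta)))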

The Curse of Depth in Kernel Regime

no code implementations • NeurIPS Workshop ICBINB 2021 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).
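
Concretely, the empirical NTK of a network f_theta is Theta(x, x') = <grad_theta f_theta(x), grad_theta f_theta(x')>. A minimal sketch for a one-hidden-layer ReLU network with hand-computed gradients (widths and inputs are illustrative, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 1000                                # input dimension, hidden width
W = rng.normal(size=(m, d)) / np.sqrt(d)      # hidden-layer weights
v = rng.normal(size=m) / np.sqrt(m)           # output weights

def grads(x):
    # Gradients of f(x) = v . relu(W x) with respect to (W, v).
    pre = W @ x
    h = np.maximum(pre, 0.0)
    act = (pre > 0).astype(float)
    grad_v = h                                # df/dv_j = relu(w_j . x)
    grad_W = np.outer(v * act, x)             # df/dW_jk = v_j * 1[w_j . x > 0] * x_k
    return grad_W, grad_v

def ntk(x, xp):
    gW, gv = grads(x)
    gWp, gvp = grads(xp)
    return np.sum(gW * gWp) + gv @ gvp        # inner product over all parameters

x, xp = rng.normal(size=d), rng.normal(size=d)
print(ntk(x, x), ntk(x, xp))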

Stable ResNet

no code implementations • 24 Oct 2020 • Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau

Deep ResNet architectures have achieved state-of-the-art performance on many tasks.

Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks

no code implementations • 31 May 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).
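
For a fully connected network with activation phi, the limiting NTK obeys a layer-wise recursion (stated schematically here, omitting the variance scaling constants whose effect is exactly what a mean-field analysis tracks):

\Theta^{(1)}(x,x') = \Sigma^{(1)}(x,x'), \qquad
\Theta^{(\ell)}(x,x') = \Sigma^{(\ell)}(x,x') + \dot{\Sigma}^{(\ell)}(x,x') \, \Theta^{(\ell-1)}(x,x'),

with \Sigma^{(\ell)}(x,x') = \mathbb{E}[\phi(u)\phi(v)] and \dot{\Sigma}^{(\ell)}(x,x') = \mathbb{E}[\phi'(u)\phi'(v)], the expectation taken over a centred Gaussian pair (u, v) whose covariance is \Sigma^{(\ell-1)} evaluated on (x, x'). This is the recursion of Jacot et al. (2018); the paper studies how it behaves as the depth grows.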

On the Impact of the Activation Function on Deep Neural Networks Training

no code implementations • 19 Feb 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau

The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure.
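
A quick way to see this (an illustration under i.i.d. Gaussian initialization, not an experiment from the paper): for a ReLU network with zero biases, each layer multiplies the mean squared activation by roughly sigma_w^2 / 2, so the signal vanishes for sigma_w^2 < 2, is preserved at sigma_w^2 = 2 (He initialization), and explodes for sigma_w^2 > 2.

import numpy as np

rng = np.random.default_rng(0)
width, depth = 500, 50

for sigma_w2 in (1.0, 2.0, 3.0):
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * np.sqrt(sigma_w2 / width)
        x = np.maximum(W @ x, 0.0)            # ReLU layer, zero bias
    print(f"sigma_w^2 = {sigma_w2}: mean squared activation after {depth} layers = {np.mean(x**2):.3e}")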

On the Selection of Initialization and Activation Function for Deep Neural Networks

no code implementations • ICLR 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau

We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information indeed propagates deeper for an initialization at the edge of chaos.
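
In the mean-field notation this line of work builds on (Poole et al., 2016; Schoenholz et al., 2017), write q^* for the fixed point of the pre-activation variance map and define

\chi_1 = \sigma_w^2 \, \mathbb{E}_{Z \sim \mathcal{N}(0,1)}\!\left[ \phi'(\sqrt{q^*} Z)^2 \right].

The edge of chaos is the set of initializations (sigma_b, sigma_w) with chi_1 = 1. When chi_1 < 1 the correlation between two inputs converges to its fixed point exponentially fast in depth, so distinct inputs quickly become indistinguishable, whereas on the edge of chaos the convergence is sub-exponential; this is the sense in which information propagates deeper. For ReLU the edge of chaos reduces to sigma_w^2 = 2 with sigma_b^2 = 0.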

Bayesian Nonparametrics for Sparse Dynamic Networks

no code implementations • 6 Jul 2016 • Cian Naik, Francois Caron, Judith Rousseau, Yee Whye Teh, Konstantina Palla

In this paper we propose a Bayesian nonparametric approach to modelling sparse time-varying networks.

Bayesian matrix completion: prior specification

no code implementations • 5 Jun 2014 • Pierre Alquier, Vincent Cottet, Nicolas Chopin, Judith Rousseau

While the behaviour of algorithms based on nuclear norm minimization is now well understood, an as yet unexplored avenue of research is the behaviour of Bayesian algorithms in this context.

Tasks: Matrix Completion, Recommendation Systems
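
For background (a standard formulation, not a result of the paper), the nuclear-norm approach referred to here recovers a matrix from entries observed on an index set Omega by solving

\hat{M} = \arg\min_{X} \; \tfrac{1}{2} \sum_{(i,j) \in \Omega} (X_{ij} - Y_{ij})^2 + \lambda \, \|X\|_{*},

where \|X\|_* is the sum of the singular values of X and the Y_{ij} are the observed, possibly noisy, entries. A Bayesian counterpart instead places a prior on the matrix, typically through a low-rank factorization X = U V^T, and how such priors should be specified is the question raised in the title.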
