no code implementations • 20 Jan 2024 • Sami Alabed, Daniel Belov, Bart Chrzaszcz, Juliana Franco, Dominik Grewe, Dougal Maclaurin, James Molloy, Tom Natan, Tamara Norman, Xiaoyue Pan, Adam Paszke, Norman A. Rink, Michael Schaarschmidt, Timur Sitdikov, Agnieszka Swietlik, Dimitrios Vytiniotis, Joel Wee
Training modern large neural networks (NNs) requires a combination of parallelization strategies encompassing data, model, and optimizer sharding.
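The sharding strategies named above can be illustrated in miniature. Below is a hedged sketch of just the data-parallel case, in plain Python rather than the paper's system: the batch is split across hypothetical devices, each computes a local gradient, and the gradients are averaged (an all-reduce). All names here (`data_parallel_step`, etc.) are illustrative, not the paper's API; model and optimizer sharding partition the parameters and optimizer state analogously along other axes.

```python
# Toy data parallelism: shard the batch, compute per-device gradients,
# average them, and apply one SGD step. Pure-Python stand-in for the
# partitioning a real system performs across accelerators.

def local_grad(w, xs, ys):
    """Gradient of mean squared error of the model y = w * x on one shard."""
    n = len(xs)
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, xs, ys, num_devices=2, lr=0.01):
    shard = len(xs) // num_devices
    grads = [local_grad(w, xs[i * shard:(i + 1) * shard],
                           ys[i * shard:(i + 1) * shard])
             for i in range(num_devices)]
    g = sum(grads) / num_devices   # the all-reduce: average device gradients
    return w - lr * g

# Fit y = 2x from four examples split across two "devices".
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, xs, ys)
print(w)
```

Because the loss is a mean over examples, averaging per-shard gradients reproduces the full-batch gradient exactly, which is why this decomposition is safe.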
no code implementations • 20 May 2021 • Roy Frostig, Matthew J. Johnson, Dougal Maclaurin, Adam Paszke, Alexey Radul
We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition.
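That decomposition can be sketched concretely. The toy code below (illustrative names, not the paper's implementation) does forward-mode linearization with dual numbers, then obtains the transposed linear map by probing basis vectors — a dense stand-in for the symbolic transposition the paper describes. Composing the two yields reverse-mode gradients.

```python
# Reverse mode = (forward-mode) linearization followed by transposition.

def jvp(f, x, v):
    """Forward mode via a minimal dual-number class: returns f(x) and the
    directional derivative of f at x along v."""
    class Dual:
        def __init__(self, val, tan):
            self.val, self.tan = val, tan
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o, 0.0)
            return Dual(self.val * o.val, self.val * o.tan + self.tan * o.val)
        __rmul__ = __mul__
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o, 0.0)
            return Dual(self.val + o.val, self.tan + o.tan)
        __radd__ = __add__
    out = f([Dual(xi, vi) for xi, vi in zip(x, v)])
    return out.val, out.tan

def linearize(f, x):
    """Return y = f(x) and the linear map v -> df(x)[v]."""
    y, _ = jvp(f, x, [0.0] * len(x))
    return y, lambda v: jvp(f, x, v)[1]

def transpose_linear(lin, n):
    """Transpose a linear map R^n -> R by evaluating it on basis vectors
    (a dense proxy for symbolic transposition)."""
    basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    cols = [lin(e) for e in basis]            # entries of the Jacobian row
    return lambda ct: [ct * c for c in cols]  # cotangent pullback

def grad(f, x):
    """Reverse mode: linearize, transpose, pull back the cotangent 1.0."""
    _, lin = linearize(f, x)
    vjp = transpose_linear(lin, len(x))
    return vjp(1.0)

f = lambda x: x[0] * x[0] + x[1] * x[0]   # f(a, b) = a^2 + a*b
print(grad(f, [3.0, 2.0]))                # gradient [2a + b, a] = [8.0, 3.0]
```

The basis-probing transpose costs n forward passes here; the point of the paper's symbolic transposition is to get the same map without that cost.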
no code implementations • 23 Oct 2019 • Alexey Radul, Brian Patton, Dougal Maclaurin, Matthew D. Hoffman, Rif A. Saurous
We present a general approach to batching arbitrary computations for accelerators such as GPUs.
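A minimal sketch of the idea, under the assumption that programs are built from overloadable arithmetic (names like `vmap` and `Batched` are illustrative, not the paper's API): a scalar program runs unchanged over a whole batch when its values are replaced by batched values that apply every operation elementwise.

```python
# Toy program-level batching: lift scalars to vectors so one program
# describes a whole batch of computations.

class Batched:
    """A vector of values whose arithmetic is elementwise."""
    def __init__(self, vals):
        self.vals = list(vals)
    def _lift(self, o):
        # Broadcast unbatched constants across the batch dimension.
        return o.vals if isinstance(o, Batched) else [o] * len(self.vals)
    def __add__(self, o):
        return Batched(a + b for a, b in zip(self.vals, self._lift(o)))
    __radd__ = __add__
    def __mul__(self, o):
        return Batched(a * b for a, b in zip(self.vals, self._lift(o)))
    __rmul__ = __mul__

def vmap(f):
    """Vectorize f over a leading batch dimension of its argument."""
    return lambda xs: f(Batched(xs)).vals

square_plus_one = lambda x: x * x + 1.0
print(vmap(square_plus_one)([1.0, 2.0, 3.0]))  # [2.0, 5.0, 10.0]
```

On an accelerator, each lifted operation becomes one wide kernel launch over the batch, which is what makes this transformation pay off; handling data-dependent control flow across the batch is the hard part the paper addresses and this sketch omits.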
8 code implementations • NeurIPS 2015 • David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams
We introduce a convolutional neural network that operates directly on graphs.
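One layer of such a network can be sketched as follows (a hedged toy version: hypothetical helper names, hand-picked weights, pure Python for clarity). Each node's feature vector is updated from its own features plus the sum of its neighbors' features, followed by a nonlinearity — the molecular-graph analogue of a convolution.

```python
# One graph-convolution layer: h_v <- relu(W_self h_v + W_neigh sum_u h_u)
# over neighbors u of v.

def graph_conv_layer(features, adjacency, weights_self, weights_neigh):
    """features: node -> feature list; adjacency: node -> neighbor list."""
    def matvec(W, v):
        return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]
    def vadd(a, b):
        return [x + y for x, y in zip(a, b)]
    new = {}
    for v, h in features.items():
        neigh_sum = [0.0] * len(h)
        for u in adjacency[v]:
            neigh_sum = vadd(neigh_sum, features[u])
        pre = vadd(matvec(weights_self, h), matvec(weights_neigh, neigh_sum))
        new[v] = [max(0.0, x) for x in pre]   # ReLU
    return new

# Toy 3-atom chain A-B-C with 2-dim features and identity weight matrices.
feats = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
I = [[1.0, 0.0], [0.0, 1.0]]
out = graph_conv_layer(feats, adj, I, I)
print(out)
```

Summing (rather than concatenating) neighbor features is what makes the layer invariant to atom ordering, so the same weights apply to molecules of any size and shape.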
Ranked #2 on Drug Discovery on the HIV dataset
1 code implementation • 6 Apr 2015 • Dougal Maclaurin, David Duvenaud, Ryan P. Adams
By tracking the change in entropy of the parameter distribution over the sequence of transformations that each optimization step applies to it, we form a scalable, unbiased estimate of the variational lower bound on the log marginal likelihood.
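A hedged sketch of the bookkeeping, under toy assumptions (a one-dimensional Gaussian posterior, gradient ascent on the log posterior; `elbo_estimate` is an illustrative name, not the paper's code): each optimization step is a transformation of the parameter distribution, and its entropy change is the expected log-determinant of the step's Jacobian. Adding the final log posterior to the accumulated entropy gives a per-trajectory estimate of the variational lower bound.

```python
import math, random

# Track entropy through an optimization trajectory via per-step Jacobian
# log-determinants (scalar case), yielding an ELBO estimate.

def log_post(w):          # toy unnormalized log posterior: -w^2 / 2
    return -0.5 * w * w

def grad_log_post(w):
    return -w

def hess_log_post(w):
    return -1.0

def elbo_estimate(lr=0.1, steps=20, samples=200, init_std=3.0, seed=0):
    rng = random.Random(seed)
    # Entropy of the Gaussian initialization distribution.
    init_entropy = 0.5 * math.log(2 * math.pi * math.e * init_std ** 2)
    total = 0.0
    for _ in range(samples):
        w = rng.gauss(0.0, init_std)
        entropy = init_entropy
        for _ in range(steps):
            # Each ascent step w -> w + lr * grad has Jacobian 1 + lr * hess;
            # its log|det| is the entropy change of the pushed-forward density.
            entropy += math.log(abs(1.0 + lr * hess_log_post(w)))
            w = w + lr * grad_log_post(w)
        total += log_post(w) + entropy   # per-trajectory ELBO term
    return total / samples

result = elbo_estimate()
print(result)
```

For this toy target the true log normalizer is `0.5 * log(2 * pi) ≈ 0.919`, and the estimate stays below it, as a lower bound should; running more steps trades entropy for log-posterior mass, which is the early-stopping trade-off the paper studies.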
2 code implementations • 11 Feb 2015 • Dougal Maclaurin, David Duvenaud, Ryan P. Adams
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable.
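The hypergradients the paper computes can be sketched in miniature by differentiating through an unrolled training run. The toy below uses forward-mode dual numbers for brevity, whereas the paper reverses SGD-with-momentum to get the same derivative memory-efficiently; the loss, names, and numbers are illustrative assumptions.

```python
# d(final loss)/d(learning rate) by running SGD on dual numbers.

class Dual:
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.tan + self.tan * o.val)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.tan - o.tan)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)

def train(lr, w0=5.0, steps=10):
    """Plain SGD on the toy loss L(w) = w^2; returns the final loss.
    Uses only * and -, so it runs on floats or Duals unchanged."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * w        # dL/dw
        w = w - lr * grad
    return w * w

# Seed the tangent of the learning rate with 1.0 to get the hypergradient.
out = train(Dual(0.1, 1.0))
print(out.val, out.tan)
```

Here each step is `w <- w * (1 - 2 * lr)`, so the final loss is `25 * (1 - 2 * lr)**20` and the hypergradient at `lr = 0.1` is `-1000 * 0.8**19`, which the dual-number run reproduces. Forward mode needs one such run per hyperparameter; reversing the training dynamics, as the paper does, gets all hypergradients from a single backward pass.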
no code implementations • 22 Mar 2014 • Dougal Maclaurin, Ryan P. Adams
Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose tool for Bayesian inference.
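For reference, the basic MCMC recipe that the paper builds on is the Metropolis accept/reject loop sketched below (Firefly Monte Carlo itself augments it with auxiliary variables so each step touches only a subset of the data; that machinery is not shown). The target and tuning constants here are toy assumptions.

```python
import math, random

# Random-walk Metropolis: propose a Gaussian perturbation, accept with
# probability min(1, p(prop) / p(current)).

def metropolis(log_prob, x0, steps=5000, step_size=1.0, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        lp_prop = log_prob(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: a standard normal log density (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
print(mean)
```

Note that evaluating `log_prob` in a Bayesian posterior sums over the full dataset at every step; that per-step cost is exactly what subset-based methods like the paper's aim to avoid.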