1 code implementation • 7 Apr 2021 • Nicholas Guttenberg
In this paper, we investigate the dynamics of information transfer in evolutionary systems.
1 code implementation • 7 Apr 2021 • Nicholas Guttenberg, Marek Rosa
We create an artificial system of agents (attention-based neural networks) that selectively exchange messages with each other, in order to study the emergence of memetic evolution and how memetic evolutionary pressures interact with the genetic evolution of the network weights.
1 code implementation • 3 Dec 2019 • Marek Rosa, Olga Afanasjeva, Simon Andersson, Joseph Davidson, Nicholas Guttenberg, Petr Hlubuček, Martin Poliak, Jaroslav Vítku, Jan Feyereisl
In this work, we propose a novel memory-based multi-agent meta-learning architecture and learning procedure that learns a shared communication policy; by learning learning algorithms through communication, agents can rapidly adapt to new and unseen environments.
1 code implementation • 25 Nov 2019 • Elizabeth J. Tasker, Matthieu Laneuville, Nicholas Guttenberg
Here, we demonstrate the use of a neural network that models the density of planets in a six-dimensional property space, which is then used to impute a probability distribution for missing values.
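A minimal sketch of the imputation idea, using a simple Gaussian-kernel conditional density over a toy catalog as a stand-in for the paper's neural density model (the catalog values and bandwidth are illustrative, not from the paper):

```python
import numpy as np

def impute_distribution(data, observed_idx, observed_vals, bandwidth=0.5):
    """Impute a probability distribution for a missing property by
    conditioning a kernel density on the observed properties: each
    catalog entry is weighted by its likelihood under a Gaussian kernel
    centered on the observed values."""
    diffs = data[:, observed_idx] - observed_vals
    w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / bandwidth ** 2)
    return w / w.sum()  # mixture weights over catalog entries

# Toy 2-property catalog: impute property 1 given that property 0 = 1.0.
catalog = np.array([[1.0, 10.0], [1.1, 11.0], [5.0, 50.0]])
weights = impute_distribution(catalog, [0], np.array([1.0]))
mean_missing = weights @ catalog[:, 1]  # expectation of the missing property
```

The imputed value comes with a full distribution (the mixture weights), not just a point estimate, which is the property the paper exploits.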
Earth and Planetary Astrophysics • Instrumentation and Methods for Astrophysics
no code implementations • 15 Aug 2019 • Jennifer F. Hoyal Cuthill, Nicholas Guttenberg, Sophie Ledger, Robyn Crowther, Blanca Huertas
Traditional anatomical analyses captured only a fraction of real phenomic information.
1 code implementation • 8 Feb 2019 • Nicholas Guttenberg
We consider a simple method of summarizing the prediction error such that the optimal strategy is to output a distribution of predictions whose support matches the support of the distribution of targets: we optimize against the minimal value of the loss over a set of samples from the prediction distribution, rather than the mean.
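The minimum-over-samples summary can be sketched in a few lines, assuming a squared-error per-sample loss (the choice of base loss here is illustrative):

```python
import numpy as np

def min_sample_loss(predictions, target):
    """Summarize prediction error as the minimum per-sample loss over a
    set of samples drawn from the prediction distribution, rather than
    the mean; a sample distribution whose support covers the targets can
    drive this to zero."""
    return np.min(np.square(predictions - target))

samples = np.array([0.0, 2.0])        # two draws from the prediction distribution
loss = min_sample_loss(samples, 1.5)  # the closest sample sets the loss
```

Averaging the per-sample losses instead would penalize every sample that misses the target, collapsing the prediction distribution toward its mean; taking the minimum only requires that some sample lands near each target.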
no code implementations • 12 Dec 2018 • Nicholas Guttenberg, Nathaniel Virgo, Alexandra Penn
In this paper, we hope to bridge that gap by reviewing common barriers to open-endedness in the evolution-inspired approach and how they are dealt with in the evolutionary case: collapse of diversity, saturation of complexity, and failure to form new kinds of individuality.
1 code implementation • 1 Jun 2018 • Nicholas Guttenberg, Martin Biehl, Nathaniel Virgo, Ryota Kanai
We investigate the use of attentional neural network layers in order to learn a 'behavior characterization' which can be used to drive novelty search and curiosity-based policies.
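Once a behavior characterization exists, the standard novelty-search scoring step is straightforward; here is a sketch of that downstream use (the characterization vectors are placeholders for what the attentional layers would produce):

```python
import numpy as np

def novelty_score(behavior, archive, k=3):
    """Novelty of a behavior characterization: the mean distance to its
    k nearest neighbors in an archive of previously seen behaviors."""
    dists = np.linalg.norm(archive - behavior, axis=1)
    return np.sort(dists)[:k].mean()

# Archive of past behavior characterizations (toy 2-D vectors).
archive = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
score_near = novelty_score(np.array([0.05, 0.05]), archive)  # well explored
score_far = novelty_score(np.array([10.0, 10.0]), archive)   # novel region
```

Policies are then rewarded for high novelty scores, so the quality of the learned characterization directly shapes what counts as "new" behavior.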
1 code implementation • 30 Mar 2018 • Nicholas Guttenberg, Ryota Kanai
We train a network to generate mappings between training sets and classification policies (a 'classifier generator') by conditioning on the entire training set via an attentional mechanism.
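A minimal sketch of conditioning a classification on an entire training set via attention, in the spirit of the described classifier generator (the similarity-softmax form and all names here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def attention_classify(query, train_x, train_y, temperature=1.0):
    """Classify a query by attending over the whole training set:
    attention weights come from query-example similarity, and the
    prediction is the attention-weighted mix of one-hot labels."""
    scores = train_x @ query / temperature  # similarity to each example
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax attention weights
    return weights @ train_y                # weighted label average

train_x = np.array([[1.0, 0.0], [0.0, 1.0]])  # two support examples
train_y = np.eye(2)                           # their one-hot labels
probs = attention_classify(np.array([0.9, 0.1]), train_x, train_y)
```

Because the training set enters only through the attention step, the same network can emit a different classification policy for every training set it is conditioned on.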
1 code implementation • 15 Aug 2017 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai
Controlling embodied agents with many actuated degrees of freedom is a challenging task.
2 code implementations • 22 Feb 2017 • Nicholas Guttenberg, Yen Yu, Ryota Kanai
In this method, the problem of action selection is reduced to gradient descent on the latent space of the generative model: the model itself provides the means of evaluating outcomes and finding the gradient, much as the reward network in Deep Q-Networks (DQN) provides gradient information for the action generator.
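The latent-space search step can be sketched as plain gradient ascent on a differentiable value estimate; the quadratic surface below is a toy stand-in for the learned generative/evaluation model:

```python
import numpy as np

def select_action(grad_fn, z0, lr=0.1, steps=100):
    """Action selection as gradient ascent in latent space: the model's
    predicted-outcome gradient drives the update, and the final latent
    would then be decoded into a concrete action (decoder omitted)."""
    z = z0.copy()
    for _ in range(steps):
        z += lr * grad_fn(z)  # climb the predicted-value surface
    return z

# Toy value surface -||z - target||^2, peaked at z = [1, -1].
target = np.array([1.0, -1.0])
grad = lambda z: -2.0 * (z - target)  # analytic gradient of the surface
z_star = select_action(grad, np.zeros(2))
```

In the actual method the gradient would come from backpropagation through the learned model rather than an analytic formula, but the selection loop has this shape.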
2 code implementations • 14 Dec 2016 • Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, Ryota Kanai
The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks, because they innately capture a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain.
no code implementations • 1 Sep 2016 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai
We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them.