Search Results for author: Nicholas Guttenberg

Found 13 papers, 10 papers with code

Evolutionary rates of information gain and decay in fluctuating environments

1 code implementation • 7 Apr 2021 • Nicholas Guttenberg

In this paper, we investigate the dynamics of information transfer in evolutionary systems.

Bootstrapping of memetic from genetic evolution via inter-agent selection pressures

1 code implementation • 7 Apr 2021 • Nicholas Guttenberg, Marek Rosa

We create an artificial system of agents (attention-based neural networks) which selectively exchange messages with each other, in order to study the emergence of memetic evolution and how memetic evolutionary pressures interact with genetic evolution of the network weights.

BADGER: Learning to (Learn [Learning Algorithms] through Multi-Agent Communication)

1 code implementation • 3 Dec 2019 • Marek Rosa, Olga Afanasjeva, Simon Andersson, Joseph Davidson, Nicholas Guttenberg, Petr Hlubuček, Martin Poliak, Jaroslav Vítku, Jan Feyereisl

In this work, we propose a novel memory-based multi-agent meta-learning architecture and learning procedure that allows a shared communication policy to be learned; by learning to learn learning algorithms through communication, agents can rapidly adapt to new and unseen environments.

Meta-Learning

Estimating Planetary Mass with Deep Learning

1 code implementation • 25 Nov 2019 • Elizabeth J. Tasker, Matthieu Laneuville, Nicholas Guttenberg

Here, we demonstrate the use of a neural network that models the density of planets in a space of six properties; this model is then used to impute a probability distribution for missing values.

Earth and Planetary Astrophysics • Instrumentation and Methods for Astrophysics
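The imputation idea in the abstract above can be sketched without the paper's neural density model: any joint density over planet properties can be sliced at the observed values to yield a distribution over a missing one. This is a minimal stand-in sketch using a Gaussian KDE over two toy properties (the paper uses six, learned by a network); the data, property names, and the M ~ R² relation here are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical toy data standing in for real exoplanet properties; the
# paper fits a neural density model over six properties, while here a
# Gaussian KDE over just (log mass, log radius) plays the same role.
rng = np.random.default_rng(0)
log_radius = rng.normal(0.0, 0.5, size=500)
log_mass = 2.0 * log_radius + rng.normal(0.0, 0.2, size=500)  # toy relation
kde = gaussian_kde(np.vstack([log_mass, log_radius]))

def impute_mass_distribution(known_log_radius, grid=np.linspace(-4.0, 4.0, 401)):
    """Probability distribution over the missing log-mass, obtained by
    slicing the joint density at the observed log-radius."""
    pts = np.vstack([grid, np.full_like(grid, known_log_radius)])
    density = kde(pts)
    dx = grid[1] - grid[0]
    density = density / (density.sum() * dx)  # normalize the slice
    return grid, density

grid, p = impute_mass_distribution(0.5)
expected_log_mass = float(np.sum(grid * p) * (grid[1] - grid[0]))
```

The payoff of imputing a full distribution rather than a point estimate is that the width of `p` reports how constrained the missing value actually is.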

Generating the support with extreme value losses

1 code implementation • 8 Feb 2019 • Nicholas Guttenberg

We consider a simple method of summarizing the prediction error such that the optimal strategy corresponds to outputting a distribution of predictions whose support matches the support of the distribution of targets: optimizing against the minimal value of the loss over a set of samples from the prediction distribution, rather than the mean.
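The minimal-over-samples idea above can be sketched directly. This is a hedged sketch, not the paper's training setup: squared error is assumed as the base loss, and the bimodal toy targets are illustrative. Taking the minimum loss over k samples rewards a predictor whose samples cover both target modes, while a collapsed (mean) predictor pays a constant penalty.

```python
import numpy as np

def extreme_value_loss(pred_samples, target):
    """Best-of-k loss: score only the closest of k sampled predictions,
    not their average. (Squared error is an assumption here; the idea
    applies to any base loss.)"""
    return np.min((pred_samples - target) ** 2)

rng = np.random.default_rng(0)
targets = rng.choice([-1.0, 1.0], size=1000)        # bimodal targets
spread = rng.choice([-1.0, 1.0], size=(1000, 8))    # samples cover both modes
collapsed = np.zeros((1000, 8))                     # mean-style prediction

loss_spread = np.mean([extreme_value_loss(s, t) for s, t in zip(spread, targets)])
loss_collapsed = np.mean([extreme_value_loss(s, t) for s, t in zip(collapsed, targets)])
```

Under a mean-based loss the collapsed predictor would be optimal; under the minimal-sample loss the spread predictor wins, which is exactly the support-matching behavior the abstract describes.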

On the potential for open-endedness in neural networks

no code implementations • 12 Dec 2018 • Nicholas Guttenberg, Nathaniel Virgo, Alexandra Penn

In this paper, we hope to bridge that gap by reviewing common barriers to open-endedness in the evolution-inspired approach and how they are dealt with in the evolutionary case: collapse of diversity, saturation of complexity, and failure to form new kinds of individuality.

BIG-bench Machine Learning

Being curious about the answers to questions: novelty search with learned attention

1 code implementation • 1 Jun 2018 • Nicholas Guttenberg, Martin Biehl, Nathaniel Virgo, Ryota Kanai

We investigate the use of attentional neural network layers in order to learn a 'behavior characterization' which can be used to drive novelty search and curiosity-based policies.
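The behavior characterization the abstract describes is learned by attentional layers; once a behavior vector exists, the standard novelty-search score is the mean distance to the k nearest behaviors in an archive. This sketch assumes that standard formulation (the abstract does not spell it out) and uses hand-made behavior vectors in place of learned ones.

```python
import numpy as np

def novelty_score(behavior, archive, k=3):
    """Novelty as mean distance to the k nearest behaviors in the archive.
    `behavior` stands in for a learned behavior characterization vector."""
    dists = np.linalg.norm(archive - behavior, axis=1)
    return float(np.mean(np.sort(dists)[:k]))

archive = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]])
common = novelty_score(np.array([0.05, 0.05]), archive)  # near the crowd
rare = novelty_score(np.array([3.0, 3.0]), archive)      # far from everything
```

A novelty-driven policy would then prefer behaviors like `rare` over `common`, regardless of external reward.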

Learning to generate classifiers

1 code implementation • 30 Mar 2018 • Nicholas Guttenberg, Ryota Kanai

We train a network to generate mappings between training sets and classification policies (a 'classifier generator') by conditioning on the entire training set via an attentional mechanism.

General Classification
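The conditioning-on-the-training-set mechanism above can be sketched in its simplest form: a query attends over the training inputs and the attention weights mix the training labels into a prediction. This is a stand-in sketch only (dot-product attention on raw inputs with a temperature); the paper's classifier generator learns the embeddings and emits classification policies rather than this fixed rule.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend_classify(x_train, y_train_onehot, x_query, temp=1.0):
    """Condition on the entire training set via attention: the query's
    label distribution is the attention-weighted mix of training labels."""
    scores = x_train @ x_query / temp
    weights = softmax(scores)
    return weights @ y_train_onehot

x_train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y_train = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
probs = attend_classify(x_train, y_train, np.array([0.95, 0.05]), temp=0.1)
```

Because the whole training set enters through attention, swapping in a different training set changes the induced classifier with no weight updates, which is the property the abstract highlights.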

Learning body-affordances to simplify action spaces

1 code implementation • 15 Aug 2017 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai

Controlling embodied agents with many actuated degrees of freedom is a challenging task.

Counterfactual Control for Free from Generative Models

2 code implementations • 22 Feb 2017 • Nicholas Guttenberg, Yen Yu, Ryota Kanai

In this method, the problem of action selection is reduced to one of gradient descent on the latent space of the generative model, with the model itself providing the means of evaluating outcomes and finding the gradient, much like how the reward network in Deep Q-Networks (DQN) provides gradient information for the action generator.

counterfactual
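The reduction described above, action selection as gradient descent on a generative model's latent space, can be sketched with a toy model. This is an illustrative sketch under strong assumptions: the decoder here is linear and the gradient is written analytically, whereas the paper's generative model is learned and differentiated automatically.

```python
import numpy as np

# A toy linear decoder standing in for the paper's learned generative
# model: latent z maps to an outcome in observation space.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))

def decode(z):
    return W @ z

def select_action(target, steps=2000, lr=0.05):
    """Action selection as gradient descent on the latent space: find z
    whose decoded outcome matches the desired target outcome."""
    z = np.zeros(4)
    for _ in range(steps):
        err = decode(z) - target
        z -= lr * (W.T @ err)  # gradient of 0.5 * ||decode(z) - target||^2
    return z

target = np.array([1.0, -0.5, 0.3])
z_star = select_action(target)
residual = np.linalg.norm(decode(z_star) - target)
```

The model plays both roles the abstract assigns it: `decode` evaluates the counterfactual outcome of a candidate latent, and its transpose supplies the gradient that steers the search.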

Permutation-equivariant neural networks applied to dynamics prediction

2 code implementations • 14 Dec 2016 • Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, Ryota Kanai

The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain.

Translation
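Just as convolution matches translation symmetry, a permutation-equivariant layer matches the symmetry of a set of interchangeable objects: reordering the inputs must reorder the outputs identically. A minimal sketch of one common construction (per-object transform plus a pooled term; the specific form is an assumption, not necessarily the paper's layer):

```python
import numpy as np

def perm_equivariant_layer(X, W, V):
    """Each row (object) gets the same linear transform, plus a term from
    the mean over the set, so permuting rows just permutes outputs."""
    pooled = X.mean(axis=0, keepdims=True)
    return X @ W + pooled @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 interchangeable objects, 3 features each
W = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
perm = rng.permutation(5)

out = perm_equivariant_layer(X, W, V)
out_perm = perm_equivariant_layer(X[perm], W, V)
equivariant = np.allclose(out[perm], out_perm)
```

Stacking such layers lets a network predict per-object dynamics without ever committing to an ordering of the objects.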

Neural Coarse-Graining: Extracting slowly-varying latent degrees of freedom with neural networks

no code implementations • 1 Sep 2016 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai

We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them.

Time Series • Time Series Analysis
