Search Results for author: Jonathan Lawry

Found 15 papers, 1 paper with code

Learning Interpretable Models of Aircraft Handling Behaviour by Reinforcement Learning from Human Feedback

no code implementations • 26 May 2023 • Tom Bewley, Jonathan Lawry, Arthur Richards

We propose a method to capture the handling abilities of fast jet pilots in a software model via reinforcement learning (RL) from human preference feedback.

Reinforcement Learning (RL)
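The preference-feedback setup described above is most commonly implemented by fitting a reward model under the Bradley-Terry choice rule over trajectory segments. A minimal sketch of that standard recipe (the network shape, feature dimension, and training details are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Hypothetical reward model: maps an 8-dim state-action feature vector to a scalar reward.
reward_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, pref):
    """Bradley-Terry loss: pref=1 means the human preferred segment A.
    seg_a, seg_b: (T, 8) tensors of per-step features for two trajectory segments."""
    r_a = reward_net(seg_a).sum()   # predicted return of segment A
    r_b = reward_net(seg_b).sum()   # predicted return of segment B
    # P(A preferred) = exp(r_a) / (exp(r_a) + exp(r_b)), so cross-entropy on the
    # stacked returns is exactly the Bradley-Terry negative log-likelihood.
    logits = torch.stack([r_a, r_b]).unsqueeze(0)
    target = torch.tensor([0 if pref == 1 else 1])
    return nn.functional.cross_entropy(logits, target)

# Toy update on one random preference pair.
seg_a, seg_b = torch.randn(20, 8), torch.randn(20, 8)
loss = preference_loss(seg_a, seg_b, pref=1)
opt.zero_grad(); loss.backward(); opt.step()
```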

Two-step counterfactual generation for OOD examples

no code implementations • 10 Feb 2023 • Nawid Keshtmand, Raul Santos-Rodriguez, Jonathan Lawry

Two fundamental requirements for deploying machine learning models in safety-critical systems are the ability to correctly detect out-of-distribution (OOD) data and the ability to explain the model's predictions.

counterfactual • Out of Distribution (OOD) Detection • +1
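Of the two requirements the abstract names, the OOD-detection half is often met with a Mahalanobis-distance score over feature embeddings. A toy sketch under that assumption (this is not the paper's two-step counterfactual method):

```python
import numpy as np

def fit_gaussian(features):
    """Fit a class-agnostic Gaussian to in-distribution feature vectors (N, d)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(x, mu, cov_inv):
    """Squared Mahalanobis distance of x from the fit; larger means more likely OOD."""
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4))             # in-distribution features
mu, cov_inv = fit_gaussian(train)
print(ood_score(rng.normal(size=4), mu, cov_inv))         # small: in-distribution
print(ood_score(rng.normal(size=4) + 5.0, mu, cov_inv))   # large: flag as OOD
```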

Reward Learning with Trees: Methods and Evaluation

no code implementations • 3 Oct 2022 • Tom Bewley, Jonathan Lawry, Arthur Richards, Rachel Craddock, Ian Henderson

Recent efforts to learn reward functions from human feedback have tended to use deep neural networks, whose lack of transparency hampers our ability to explain agent behaviour or verify alignment.
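A transparent alternative to a deep reward network, in the spirit of the title, is a shallow regression tree fitted to reward targets. The sketch below uses synthetic states and targets and is not the paper's algorithm:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(1000, 3))   # synthetic state features
# Hypothetical reward targets, e.g. distilled from a preference-trained model.
targets = (states[:, 0] > 0).astype(float) - 0.5 * np.abs(states[:, 1])

tree = DecisionTreeRegressor(max_depth=3).fit(states, targets)
# Unlike a deep network, the learned reward can be printed and audited directly.
print(export_text(tree, feature_names=["x0", "x1", "x2"]))
```

The depth cap is the transparency lever: a depth-3 tree yields at most eight reward regions, each described by a short, checkable rule.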

Summarising and Comparing Agent Dynamics with Contrastive Spatiotemporal Abstraction

no code implementations • 17 Jan 2022 • Tom Bewley, Jonathan Lawry, Arthur Richards

We introduce a data-driven, model-agnostic technique for generating a human-interpretable summary of the salient points of contrast within an evolving dynamical system, such as the learning process of a control agent.

reinforcement-learning • Reinforcement Learning (RL)

The Impact of Network Connectivity on Collective Learning

no code implementations • 1 Jun 2021 • Michael Crosscombe, Jonathan Lawry

Models of collective behaviour often rely on the assumption of total connectivity between agents to provide effective information sharing within the system, but this assumption may be ill-advised.

Decision Making
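One way to probe the total-connectivity assumption is to run the same belief-averaging dynamics on a complete graph and on a sparse one and compare how far beliefs remain spread; a toy sketch with networkx (the graph sizes and update rule are illustrative assumptions, not the paper's model):

```python
import networkx as nx
import numpy as np

def consensus_spread(graph, steps=50, seed=0):
    """Agents repeatedly average their scalar beliefs with graph neighbours;
    returns the remaining spread of beliefs after `steps` rounds."""
    rng = np.random.default_rng(seed)
    belief = rng.uniform(size=graph.number_of_nodes())
    for _ in range(steps):
        new = belief.copy()
        for i in graph.nodes:
            nbrs = list(graph.neighbors(i))
            new[i] = np.mean([belief[j] for j in nbrs + [i]])
        belief = new
    return belief.max() - belief.min()

n = 50
print(consensus_spread(nx.complete_graph(n)))   # total connectivity: immediate agreement
print(consensus_spread(nx.cycle_graph(n)))      # sparse ring: much slower convergence
```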

TripleTree: A Versatile Interpretable Representation of Black Box Agents and their Environments

1 code implementation • 10 Sep 2020 • Tom Bewley, Jonathan Lawry

In explainable artificial intelligence, there is increasing interest in understanding the behaviour of autonomous agents to build trust and validate performance.

Explainable artificial intelligence • reinforcement-learning • +1

Modelling Agent Policies with Interpretable Imitation Learning

no code implementations • 19 Jun 2020 • Tom Bewley, Jonathan Lawry, Arthur Richards

As we deploy autonomous agents in safety-critical domains, it becomes important to develop an understanding of their internal mechanisms and representations.

Imitation Learning
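A standard route to interpretable imitation, consistent with the task tag above, is to fit a small decision tree to a demonstrator's state-action pairs; a minimal sketch with a synthetic expert (not the paper's exact procedure):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
states = rng.uniform(-1, 1, size=(2000, 2))
# Hypothetical expert policy: take action 1 when the first state feature is positive.
actions = (states[:, 0] > 0).astype(int)

policy = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(policy, feature_names=["s0", "s1"]))   # human-readable policy rules
print(policy.predict([[0.3, -0.9]]))                     # imitated action for a new state
```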

Evidence Propagation and Consensus Formation in Noisy Environments

no code implementations • 13 May 2019 • Michael Crosscombe, Jonathan Lawry, Palina Bartashevich

Results show that combining updates on direct evidence with belief combination between agents yields better consensus on the best state than evidence updating alone.
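The quoted finding pairs two operations: noisy evidential updating and pairwise belief combination. The toy sketch below uses normalised-product pooling for the combination step; both operators are illustrative assumptions, not necessarily the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(3)
K, N = 5, 20                          # number of world states, number of agents
true_state = 0
beliefs = np.full((N, K), 1.0 / K)    # uniform initial beliefs over states

def evidence_update(b, noise=0.2):
    """Boost the true state, but with probability `noise` boost a wrong one."""
    k = true_state if rng.random() > noise else rng.integers(1, K)
    b = b.copy(); b[k] *= 2.0
    return b / b.sum()

def pool(b1, b2):
    """Combine two agents' beliefs by normalised product (log-linear pooling)."""
    p = b1 * b2
    return p / p.sum()

for _ in range(100):
    i, j = rng.choice(N, size=2, replace=False)
    beliefs[i] = beliefs[j] = pool(beliefs[i], beliefs[j])  # combination step
    i = rng.integers(N)
    beliefs[i] = evidence_update(beliefs[i])                # evidence step

print(beliefs.mean(axis=0))   # mass should concentrate on the true state
```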

A Model of Multi-Agent Consensus for Vague and Uncertain Beliefs

no code implementations • 11 Dec 2016 • Michael Crosscombe, Jonathan Lawry

Finally, if agent interactions are guided by belief quality, measured as similarity to the true state of the world, then applying the consensus operator alone results in the population converging to a high-quality shared belief.

Exploiting Vagueness for Multi-Agent Consensus

no code implementations • 19 Jul 2016 • Michael Crosscombe, Jonathan Lawry

A framework for consensus modelling is introduced using Kleene's three valued logic as a means to express vagueness in agents' beliefs.
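On Kleene truth values {0, 1/2, 1}, a natural pairwise consensus operator keeps agreement, lets the borderline value 1/2 defer to the other agent, and resolves direct 0-versus-1 conflict to the borderline value. A sketch of that operator (the vectorised implementation and the example values are mine):

```python
import numpy as np

def consensus(v1, v2):
    """Pairwise consensus on Kleene truth values 0, 0.5, 1 (elementwise):
    agreement is kept, 0.5 defers to the other agent, and direct conflict
    between 0 and 1 resolves to the borderline value 0.5."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    out = np.where(v1 == v2, v1, np.nan)        # agreement is kept
    out = np.where(v1 == 0.5, v2, out)          # borderline defers
    out = np.where(v2 == 0.5, v1, out)
    return np.where(np.isnan(out), 0.5, out)    # 0 vs 1 -> borderline

a = [1, 0, 0.5, 1]
b = [1, 1, 0,   0.5]
print(consensus(a, b))   # -> [1.  0.5 0.  1. ]
```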

The Utility of Hedged Assertions in the Emergence of Shared Categorical Labels

no code implementations • 25 Jan 2016 • Martha Lewis, Jonathan Lawry

Results show that using hedged assertions positively affects the emergence of shared categories in two distinct ways.

Concept Generation in Language Evolution

no code implementations • 25 Jan 2016 • Martha Lewis, Jonathan Lawry

This thesis investigates the generation of new concepts from combinations of existing concepts as a language evolves.

A Label Semantics Approach to Linguistic Hedges

no code implementations • 25 Jan 2016 • Martha Lewis, Jonathan Lawry

We introduce a model for the linguistic hedges 'very' and 'quite' within the label semantics framework, combined with the prototype and conceptual spaces theories of concepts.
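Under a prototype reading of label semantics, a label is appropriate for an element when it lies within an uncertain threshold distance of the prototype, and a hedge rescales that threshold: 'very' tightens it, 'quite' loosens it. A toy sketch of this idea (the scaling constants are illustrative, not the paper's):

```python
import numpy as np

def appropriateness(x, prototype, thresholds):
    """Appropriateness of a label for x: the fraction of sampled threshold
    values epsilon for which |x - prototype| <= epsilon."""
    return float(np.mean(np.abs(x - prototype) <= thresholds))

rng = np.random.default_rng(4)
eps = rng.uniform(0, 2, size=10_000)   # uncertain threshold for the label 'tall'

x, proto = 1.2, 0.0
print(appropriateness(x, proto, eps))          # 'tall'
print(appropriateness(x, proto, 0.5 * eps))    # 'very tall': tighter threshold, lower score
print(appropriateness(x, proto, 1.5 * eps))    # 'quite tall': looser threshold, higher score
```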

Emerging Dimension Weights in a Conceptual Spaces Model of Concept Combination

no code implementations • 25 Jan 2016 • Martha Lewis, Jonathan Lawry

The expected value and the variance of these weights across agents may be predicted from the distribution of elements in the conceptual space, as determined by the underlying environment, together with the rate at which agents adopt others' concepts.
