1 code implementation • 11 Oct 2023 • Lindsay Sanneman, Mycal Tucker, Julie Shah
This empirical link between human factors and information-theoretic concepts provides an important mathematical characterization of the workload-understanding tradeoff, which enables user-tailored XAI design.
no code implementations • 30 Nov 2022 • Seth Karten, Mycal Tucker, Siva Kailas, Katia Sycara
We evaluate the learned communication `language' through direct causal analysis of messages in non-sparse runs to determine the range of lossless sparse budgets, which allow zero-shot sparsity, and the range of sparse budgets that incur a reward loss, which our learned gating function minimizes with few-shot sparsity.
no code implementations • 30 Jun 2022 • Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky
Emergent communication research often focuses on optimizing task-specific utility as a driver for communication.
1 code implementation • 27 May 2022 • Mycal Tucker, Julie Shah
Artificial neural nets can represent and classify many types of data but are often tailored to particular applications -- e.g., for "fair" or "hierarchical" classification.
1 code implementation • NAACL 2022 • Mycal Tucker, Tiwalayo Eisape, Peng Qian, Roger Levy, Julie Shah
Recent causal probing literature reveals when language models and syntactic probes use similar representations.
no code implementations • 26 Jan 2022 • Mycal Tucker, William Kuhl, Khizer Shahid, Seth Karten, Katia Sycara, Julie Shah
Neural nets are powerful function approximators, but the behavior of a given neural net, once trained, cannot be easily modified.
no code implementations • 19 Jan 2022 • Seth Karten, Mycal Tucker, Huao Li, Siva Kailas, Michael Lewis, Katia Sycara
In human-agent teams tested in benchmark environments, where agents have been modeled using the Enforcers, we find that a prototype-based method produces meaningful discrete tokens that enable human partners to learn agent communication faster and better than a one-hot baseline.
no code implementations • NeurIPS 2021 • Mycal Tucker, Huao Li, Siddharth Agrawal, Dana Hughes, Katia Sycara, Michael Lewis, Julie Shah
Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone.
1 code implementation • 28 May 2021 • Mycal Tucker, Peng Qian, Roger Levy
Neural language models exhibit impressive performance on a variety of tasks, but their internal reasoning may be difficult to understand.
no code implementations • 16 Jan 2020 • Mycal Tucker, Yilun Zhou, Julie Shah
Robotic agents must adopt existing social conventions in order to be effective teammates.