no code implementations • 4 Feb 2024 • Gaël Gendron, Bao Trung Nguyen, Alex Yuxuan Peng, Michael Witbrock, Gillian Dobbie
We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks.
1 code implementation • 21 Dec 2023 • Gaël Gendron, Yang Chen, Mitchell Rogers, Yiping Liu, Mihailo Azhar, Shahrokh Heidari, David Arturo Soriano Valdez, Kobe Knowles, Padriac O'Leary, Simon Eyre, Michael Witbrock, Gillian Dobbie, Jiamou Liu, Patrice Delmas
Better understanding the natural world is a crucial task with a wide range of applications.
no code implementations • 3 Dec 2023 • Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, Rostam J. Neuwirth
The importance and novelty of this survey lie in its exploration of the perspective of under-represented societies.
no code implementations • 24 Oct 2023 • Xinglong Chang, Katharina Dost, Gillian Dobbie, Jörg Wicker
This paper presents a novel fully-agnostic framework, DIVA (Detecting InVisible Attacks), that detects attacks relying solely on analysis of the potentially poisoned data set.
no code implementations • 16 Oct 2023 • Xinglong Chang, Gillian Dobbie, Jörg Wicker
To demonstrate that this risk is inherent in the adversary's objective, we propose FALFA (Fast Adversarial Label-Flipping Attack), a novel, efficient attack for crafting adversarial labels.
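The general idea of an adversarial label-flipping attack can be illustrated with a minimal sketch. This is a generic budget-constrained flipping heuristic, not FALFA itself; the function name and the use of model scores to rank examples are assumptions for illustration.

```python
def flip_labels(labels, scores, budget):
    """Generic label-flipping sketch (hypothetical, not the FALFA algorithm):
    flip the binary labels of the `budget` examples the model is least
    certain about, i.e., those with scores closest to the 0.5 boundary."""
    # rank examples by uncertainty (distance of the score from 0.5)
    idx = sorted(range(len(labels)), key=lambda i: abs(scores[i] - 0.5))[:budget]
    flipped = list(labels)
    for i in idx:
        flipped[i] = 1 - flipped[i]  # flip the binary label
    return flipped
```

For example, with scores `[0.9, 0.51, 0.1]` and a budget of 1, only the middle (most uncertain) example's label is flipped.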
no code implementations • 11 Sep 2023 • Vithya Yogarajan, Gillian Dobbie, Timothy Pistotti, Joshua Bensemann, Kobe Knowles
Recent advances in artificial intelligence, including the development of highly sophisticated large language models (LLMs), have proven beneficial in many real-world applications.
1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie
We perform extensive evaluations of state-of-the-art LLMs, showing that they currently achieve very limited performance on these tasks in contrast with other natural language tasks, even when applying techniques known to improve performance elsewhere in NLP.
1 code implementation • 5 May 2023 • Kobe Knowles, Joshua Bensemann, Diana Benavides-Prado, Vithya Yogarajan, Michael Witbrock, Gillian Dobbie, Yang Chen
We introduce a novel architecture, the Neuromodulation Gated Transformer (NGT), which is a simple implementation of neuromodulation in transformers via a multiplicative effect.
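The multiplicative gating idea behind NGT can be sketched in a few lines. This is a hypothetical minimal illustration of neuromodulation as an elementwise multiplicative gate on hidden activations, not the authors' implementation, which applies such gating inside transformer layers.

```python
import math

def neuromodulation_gate(hidden, gate_logits):
    """Multiplicative neuromodulation sketch (hypothetical names): each
    hidden unit is scaled by a sigmoid 'gate' computed from modulatory
    signals, so the gate can amplify or suppress individual activations."""
    sigmoid = lambda g: 1.0 / (1.0 + math.exp(-g))
    return [h * sigmoid(g) for h, g in zip(hidden, gate_logits)]
```

A gate logit of 0 yields a sigmoid of 0.5, halving the corresponding activation; large positive logits pass the activation through nearly unchanged.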
no code implementations • 17 Apr 2023 • Vithya Yogarajan, Gillian Dobbie, Henry Gouk
This paper presents an indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs).
1 code implementation • 2 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie
Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders.
no code implementations • 1 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie
Deep learning models have shown success in a large variety of tasks by extracting correlation patterns from high-dimensional data, but they still struggle to generalize outside their initial distribution.
1 code implementation • 13 Sep 2021 • Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Xuyun Zhang
However, existing MIAs ignore the source of a training member, i.e., the information of which client owns it, even though it is essential to explore source privacy in FL beyond the membership privacy of examples from all clients.
2 code implementations • 14 Mar 2021 • Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.
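A common membership-inference baseline can be sketched as a simple confidence threshold: models tend to be more confident on their training members than on unseen examples. This is a generic sketch of the attack family, not the method from this paper, and the threshold value is an assumption.

```python
def membership_inference(confidence, threshold=0.9):
    """Confidence-thresholding MIA baseline (generic sketch): predict that
    an example was a training member if the target model's confidence on
    it exceeds a chosen threshold."""
    return confidence >= threshold
```

More sophisticated MIAs replace the fixed threshold with an attack model trained on shadow models, but the underlying signal is the same overconfidence on training data.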
no code implementations • 21 May 2019 • Robert Anderson, Yun Sing Koh, Gillian Dobbie, Albert Bifet
The novelty of ECPF is in how it uses similarity of classifications on new data, between a new classifier and existing classifiers, to quickly identify the best classifier to reuse.
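The reuse criterion described above, comparing a new classifier's predictions with those of stored classifiers on recent data, can be sketched as follows. Names and data layout are hypothetical; this is not the authors' ECPF code.

```python
def best_classifier_to_reuse(new_preds, stored_preds_by_clf):
    """Generic sketch of the ECPF reuse idea: pick the stored classifier
    whose predictions on the new data agree most with the new classifier's,
    on the assumption that high agreement signals a recurring concept."""
    def agreement(preds):
        # fraction of examples on which the two classifiers agree
        return sum(a == b for a, b in zip(new_preds, preds)) / len(new_preds)
    return max(stored_preds_by_clf, key=lambda name: agreement(stored_preds_by_clf[name]))
```

For instance, if the new classifier predicts `[1, 0, 1]` and a stored classifier's predictions match exactly while another's do not, the matching classifier is selected for reuse.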