no code implementations • 29 Feb 2024 • Karina Halevy, Anna Sotnikova, Badr AlKhamissi, Syrielle Montariol, Antoine Bosselut
We introduce a novel benchmark dataset, Seesaw-CF, for measuring bias-related harms of model editing and conduct the first in-depth investigation of how different weight-editing methods impact model bias.
1 code implementation • 20 Feb 2024 • Badr AlKhamissi, Muhammad ElNokrashy, Mai AlKhamissi, Mona Diab
The intricate relationship between language and culture has long been a subject of exploration within the realm of linguistic anthropology.
no code implementations • 17 Jan 2024 • Muhammad ElNokrashy, Badr AlKhamissi
In this light, we introduce Context-Contrastive Partial Diacritization (CCPD), a novel approach to partial diacritization (PD) that integrates seamlessly with existing Arabic diacritization systems.
no code implementations • 1 Dec 2023 • Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut
To identify the factors underlying LLM-brain alignment, we compute correlations between the brain alignment of LLMs and model properties such as size, problem-solving ability, and performance on world-knowledge tasks spanning diverse domains.
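The correlation analysis described above can be sketched minimally. The alignment scores and model sizes below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical brain-alignment scores for five models of increasing size
# (illustrative values only, not taken from the paper).
brain_alignment = np.array([0.41, 0.48, 0.55, 0.61, 0.66])
log_model_size = np.array([8.0, 8.7, 9.4, 10.1, 10.8])  # log10(parameters)

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Correlate alignment with one model property; the same computation
# would be repeated for each property of interest.
r = pearson_r(brain_alignment, log_model_size)
print(round(r, 3))
```

Repeating this over many properties yields the correlation profile the study examines.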
no code implementations • 24 Oct 2023 • Ahmed ElBakry, Mohamed Gabr, Muhammad ElNokrashy, Badr AlKhamissi
This task focuses on deriving a vector representation of an Arabic word from its accompanying description.
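A minimal baseline for this description-to-embedding task is to average the embeddings of the description's tokens and retrieve the nearest word by cosine similarity. The vocabulary and vectors below are toy stand-ins, not the shared-task data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embedding table (hypothetical; the shared task targets Arabic words).
emb = {w: rng.normal(size=8) for w in ["moon", "night", "sky", "book", "qamar"]}
# Make the target word's vector coincide with its description's centroid,
# so retrieval has a well-defined answer in this toy setup.
emb["qamar"] = (emb["moon"] + emb["night"] + emb["sky"]) / 3

def encode_description(tokens):
    """Baseline encoder: average the embeddings of the description tokens."""
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

def nearest_word(query, candidates):
    """Return the candidate whose embedding has the highest cosine similarity."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(candidates, key=lambda w: cos(query, emb[w]))

q = encode_description(["moon", "night", "sky"])
print(nearest_word(q, ["qamar", "book"]))
```

A learned encoder would replace the simple averaging step, but the retrieval interface stays the same.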
1 code implementation • 28 Jun 2023 • Zaid Alyafeai, Maged S. Alshaibani, Badr AlKhamissi, Hamzah Luqman, Ebrahim Alareqi, Ali Fadel
Large language models (LLMs) such as GPT-3.5 and GPT-4 have demonstrated impressive performance on various downstream tasks without requiring fine-tuning; ChatGPT is a chat-based model built on top of such LLMs.
no code implementations • 19 May 2023 • Badr AlKhamissi, Siddharth Verma, Ping Yu, Zhijing Jin, Asli Celikyilmaz, Mona Diab
Our study entails fine-tuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of fine-tuned models: OPT-R, fine-tuned without explanations, and OPT-RE, fine-tuned with explanations.
no code implementations • 30 Sep 2022 • Muhammad ElNokrashy, Badr AlKhamissi, Mona Diab
To test this, we propose a new layer fusion method: Depth-Wise Attention (DWAtt), to help re-surface signals from non-final layers.
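One way to picture this kind of depth-wise layer fusion, as a sketch under assumed shapes rather than the paper's exact formulation: each token's final-layer state attends over that token's states at every depth, and the fused representation is the attention-weighted sum across layers.

```python
import numpy as np

def depth_wise_attention(hidden_states, w_q, w_k):
    """Sketch of layer fusion by attending over depth (illustrative, not DWAtt's
    exact parameterization).

    hidden_states: (num_layers, seq_len, dim) outputs of every layer.
    Each token's final-layer state queries all layers' states for that token;
    the fused state is the softmax-weighted sum over depth.
    """
    L, T, D = hidden_states.shape
    q = hidden_states[-1] @ w_q                          # (T, D) queries
    k = hidden_states @ w_k                              # (L, T, D) keys
    scores = np.einsum("td,ltd->lt", q, k) / np.sqrt(D)  # (L, T)
    weights = np.exp(scores - scores.max(axis=0))
    weights /= weights.sum(axis=0)                       # softmax over depth
    fused = np.einsum("lt,ltd->td", weights, hidden_states)
    return fused, weights

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3, 8))                           # 4 layers, 3 tokens
Wq, Wk = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
fused, w = depth_wise_attention(H, Wq, Wk)
print(fused.shape)
```

The point of such a mechanism is that signals from non-final layers can re-surface whenever their keys match the final-layer query.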
no code implementations • 25 May 2022 • Badr AlKhamissi, Faisal Ladhak, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Asli Celikyilmaz, Mona Diab
Hate speech detection is complex; it relies on commonsense reasoning, knowledge of stereotypes, and an understanding of social nuance that differs from one culture to the next.
no code implementations • OSACT (LREC) 2022 • Badr AlKhamissi, Mona Diab
The tasks are to predict whether a tweet contains (1) Offensive language; whether it constitutes (2) Hate Speech; and, if so, (3) the Fine-Grained Hate Speech label from one of six categories.
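The three subtasks form a natural cascade, sketched below with stand-in rule-based classifiers (the category label here is illustrative, not the official taxonomy):

```python
def classify(tweet, is_offensive, is_hate, hate_category):
    """Run the three subtasks in order, stopping once a stage is negative."""
    result = {"offensive": is_offensive(tweet), "hate_speech": False, "category": None}
    if result["offensive"]:
        result["hate_speech"] = is_hate(tweet)
        if result["hate_speech"]:
            result["category"] = hate_category(tweet)
    return result

# Toy keyword-based stand-ins for the three classifiers.
out = classify("some offensive text",
               is_offensive=lambda t: "offensive" in t,
               is_hate=lambda t: False,
               hate_category=lambda t: "group-based")
print(out)
```

In practice the three stages may share an encoder or be trained jointly, but the prediction flow follows this conditional structure.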
no code implementations • 12 Apr 2022 • Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, Marjan Ghazvininejad
Recently, there has been a surge of interest in the NLP community on the use of pretrained Language Models (LMs) as Knowledge Bases (KBs).
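Cloze-style probing is the usual way to query an LM as a KB: a relational fact is turned into a fill-in-the-blank prompt. The snippet below illustrates the interface with a toy lookup in place of a real masked LM:

```python
# Toy illustration of LM-as-KB probing. The "model" is a dictionary lookup,
# standing in for a real masked language model's top prediction.
def probe(model_fill, template, subject):
    """Query a masked-LM-style callable with a cloze template."""
    return model_fill(template.format(subject=subject))

toy_model = {"The capital of France is [MASK].": "Paris"}
answer = probe(lambda prompt: toy_model.get(prompt, "<unk>"),
               "The capital of {subject} is [MASK].", "France")
print(answer)
```

With a real LM, `model_fill` would return the highest-probability token for the masked position, and accuracy over many such prompts measures how much relational knowledge the model stores.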
1 code implementation • 14 Dec 2021 • Badr AlKhamissi, Akshay Srinivasan, Zeb Kurth-Nelson, Sam Ritter
Alchemy is a new meta-learning environment rich enough to contain interesting abstractions, yet simple enough to make fine-grained analysis tractable.
no code implementations • 16 Sep 2021 • Badr AlKhamissi, Muhammad ElNokrashy, David Bernal-Casas
In this work, we explore a new Spiking Neural Network (SNN) formulation with Resonate-and-Fire (RAF) neurons (Izhikevich, 2001) trained with gradient descent via back-propagation.
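The Resonate-and-Fire model of Izhikevich (2001) can be written as a damped complex oscillator that emits a spike when the imaginary part of its state crosses a threshold. The Euler simulation below uses illustrative parameter values and does not reproduce the paper's gradient-based training:

```python
import numpy as np

def simulate_raf(I, b=-0.1, omega=2 * np.pi * 10, dt=1e-3, threshold=1.0):
    """Euler simulation of a Resonate-and-Fire neuron.

    State is complex z = x + iy with dynamics dz/dt = (b + i*omega) * z + I(t);
    a spike is emitted (and the state reset) when Im(z) crosses `threshold`.
    Parameter values here are illustrative, not taken from the paper.
    """
    z = 0 + 0j
    spikes = []
    for t, i_t in enumerate(I):
        z = z + dt * ((b + 1j * omega) * z + i_t)
        if z.imag >= threshold:
            spikes.append(t)
            z = 0 + 0j  # reset after spike
    return spikes

# A sufficiently strong constant drive makes the state spiral past threshold.
spikes = simulate_raf(np.full(2000, 50.0))
print(len(spikes))
```

The resonant (oscillatory) subthreshold dynamics are what distinguish RAF neurons from integrate-and-fire models, and they make spike timing sensitive to the frequency content of the input.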
no code implementations • ICLR Workshop Learning_to_Learn 2021 • Badr AlKhamissi, Muhammad ElNokrashy, Michael Spranger
In this work, we analyze the reinstatement mechanism introduced by Ritter et al. (2018) to reveal two classes of neurons that emerge in the agent's working memory (an epLSTM cell) when trained using episodic meta-RL on an episodic variant of the Harlow visual fixation task.
1 code implementation • EACL (WANLP) 2021 • Badr AlKhamissi, Mohamed Gabr, Muhammad ElNokrashy, Khaled Essam
The tasks are to identify the geographic origin of short Dialectal Arabic (DA) and Modern Standard Arabic (MSA) utterances at both the country and province levels.
1 code implementation • COLING (WANLP) 2020 • Badr AlKhamissi, Muhammad N. ElNokrashy, Mohamed Gabr
We propose a novel architecture for labelling character sequences that achieves state-of-the-art results on the Tashkeela Arabic diacritization benchmark.
Ranked #3 on Arabic Text Diacritization on Tashkeela
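Diacritization as character sequence labelling can be pictured as assigning one diacritic tag per input character. The lookup "model" below is a stand-in for the paper's neural tagger:

```python
# Sketch of diacritization as character-level sequence labelling: each input
# character receives one tag from a fixed diacritic inventory, and the output
# interleaves characters with their predicted diacritics.
DIACRITICS = ["", "\u064e", "\u064f", "\u0650", "\u0652"]  # none, fatha, damma, kasra, sukun

def label_characters(text, tagger):
    """Apply a per-character tagger and interleave predicted diacritics."""
    tags = [tagger(ch, i, text) for i, ch in enumerate(text)]
    return "".join(ch + d for ch, d in zip(text, tags))

# Toy tagger: a memorized diacritization of one word (kataba, "he wrote").
memo = {"كتب": ["\u064e", "\u064e", "\u064e"]}
out = label_characters("كتب", lambda ch, i, s: memo[s][i])
print(out)
```

A trained model replaces the lookup with per-character tag probabilities, but the decoding step, interleaving characters and predicted diacritics, is the same.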