1 code implementation • 9 Aug 2023 • Fahim Dalvi, Maram Hasanain, Sabri Boughorbel, Basel Mousi, Samir Abdaljalil, Nizi Nazar, Ahmed Abdelali, Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Ali, Majd Hawasly, Nadir Durrani, Firoj Alam
In this study, we introduce LLMeBench, a framework that can be seamlessly customized to evaluate LLMs on any NLP task, regardless of language.
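As a rough illustration of what a task-agnostic evaluation framework of this kind involves, the sketch below wires a prompt builder, an output post-processor, and a metric into one loop. All names (`EvalTask`, `run_benchmark`, the stub model) are hypothetical and illustrative; they are not LLMeBench's actual API.

```python
# Hypothetical sketch of a task-agnostic LLM evaluation loop in the spirit
# of a framework like LLMeBench; the names here are illustrative, not the
# framework's real API.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class EvalTask:
    name: str
    prompt_fn: Callable[[str], str]            # turns a raw input into a model prompt
    postprocess_fn: Callable[[str], str]       # normalizes the raw model output
    metric_fn: Callable[[list, list], float]   # scores predictions against gold labels

def run_benchmark(task: EvalTask, model: Callable[[str], str],
                  data: Iterable[tuple[str, str]]) -> float:
    """Run one task end to end and return its metric score."""
    preds, golds = [], []
    for text, gold in data:
        raw = model(task.prompt_fn(text))
        preds.append(task.postprocess_fn(raw))
        golds.append(gold)
    return task.metric_fn(preds, golds)

# Toy sentiment task with a stub "model" standing in for an LLM call.
accuracy = lambda p, g: sum(a == b for a, b in zip(p, g)) / len(g)
task = EvalTask(
    name="sentiment",
    prompt_fn=lambda t: f"Label the sentiment (pos/neg): {t}",
    postprocess_fn=lambda o: o.strip().lower(),
    metric_fn=accuracy,
)
stub_model = lambda prompt: "pos" if "good" in prompt else "neg"
score = run_benchmark(task, stub_model, [("good film", "pos"), ("bad film", "neg")])
```

Swapping in a different task or language only means supplying a different `EvalTask`; the loop itself stays unchanged, which is the customizability the abstract refers to.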
no code implementations • 24 May 2023 • Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram Hasanain, Basel Mousi, Sabri Boughorbel, Yassine El Kheir, Daniel Izham, Fahim Dalvi, Majd Hawasly, Nizi Nazar, Yousseif Elshahawy, Ahmed Ali, Nadir Durrani, Natasa Milic-Frayling, Firoj Alam
Our findings provide valuable insights into the applicability of LLMs for Arabic NLP and speech processing tasks.
no code implementations • 22 May 2023 • Basel Mousi, Nadir Durrani, Fahim Dalvi
We propose using a large language model, ChatGPT, as an annotator to enable fine-grained interpretation analysis of pre-trained language models.
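The idea of using an LLM as an annotator can be sketched as follows: present the model with a cluster of words grouped by a pre-trained model's representations, and ask it to name the shared concept. The prompt wording and the `stub_llm` below are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch: an LLM labels latent concepts (word clusters) found
# in a pre-trained language model. The prompt format and stub_llm are
# assumptions, not the paper's actual pipeline.
def build_annotation_prompt(cluster_words: list[str]) -> str:
    """Ask the LLM to name the concept shared by a cluster of words."""
    words = ", ".join(cluster_words)
    return (
        "The following words were grouped together by clustering a "
        f"pre-trained model's representations: {words}.\n"
        "Reply with a short label for the shared linguistic concept."
    )

def annotate_clusters(clusters: list[list[str]], llm) -> dict[int, str]:
    """Map each cluster index to an LLM-produced concept label."""
    return {i: llm(build_annotation_prompt(c)).strip()
            for i, c in enumerate(clusters)}

# Stub standing in for a ChatGPT-style API call.
def stub_llm(prompt: str) -> str:
    return "past-tense verbs" if "walked" in prompt else "country names"

labels = annotate_clusters([["walked", "jumped", "crawled"],
                            ["France", "Japan", "Chile"]], stub_llm)
```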
no code implementations • 13 Oct 2022 • Julia El Zini, Mohamad Mansour, Basel Mousi, Mariette Awad
In this work, inspired by offline information retrieval, we propose different metrics and techniques to evaluate the explainability of SA models from two angles.
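One plausible instantiation of an offline-IR-inspired explainability metric is to treat the human rationale as the set of "relevant" tokens and the explainer's highlighted tokens as the "retrieved" set, then score their overlap with precision, recall, and F1. This is a generic sketch of that framing, not the paper's exact metrics.

```python
# IR-style overlap metric for explanations: human rationale tokens play the
# role of relevant documents, explainer-highlighted tokens the retrieved
# ones. A generic sketch, not the paper's specific metric.
def rationale_overlap(explained: list[str], rationale: list[str]):
    """Return (precision, recall, F1) of explanation vs. human rationale."""
    retrieved, relevant = set(explained), set(rationale)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Explainer highlighted 2 tokens; the rationale has 2, with 1 shared.
p, r, f1 = rationale_overlap(["terrible", "plot"], ["terrible", "acting"])
```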