1 code implementation • 1 Apr 2024 • Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Fatemeh Tavakoli, Deepak John Reji
We introduce Safe and Responsible Large Language Model (SR$_{\text{LLM}}$), a model designed to enhance the safety of language generation using LLMs.
no code implementations • 14 Mar 2024 • Shaina Raza, Tahniat Khan, Veronica Chatrath, Drai Paulen-Patterson, Mizanur Rahman, Oluwanifemi Bamgbose
Our objective is to provide the research community with adaptable and precise classification models adept at identifying fake news in the context of elections.
no code implementations • 27 Nov 2023 • Tahniat Khan, Mizanur Rahman, Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza
We have created a novel dataset of North American election-related news articles through a blend of advanced language models (LMs) and thorough human verification, ensuring precision and relevance.
no code implementations • 20 Oct 2023 • Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza
Additionally, implementing a test suite such as ours lowers the environmental overhead of making models safe and fair.
no code implementations • 30 Sep 2023 • Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Y Muaad
Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making.