Search Results for author: Oluwanifemi Bamgbose

Found 5 papers, 1 paper with code

Safe and Responsible Large Language Model Development

1 code implementation • 1 Apr 2024 • Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Deepak John Reji

This paper introduces the Safe and Responsible Large Language Model (SR_LLM), an approach designed to enhance the safety of LLM-generated content.

Language Modelling • Large Language Model • +1

FakeWatch: A Framework for Detecting Fake News to Ensure Credible Elections

no code implementations • 14 Mar 2024 • Shaina Raza, Tahniat Khan, Veronica Chatrath, Drai Paulen-Patterson, Mizanur Rahman, Oluwanifemi Bamgbose

Our objective is to provide the research community with adaptable and precise classification models adept at identifying fake news related to elections.

Computational Efficiency • Misinformation • +1

FakeWatch ElectionShield: A Benchmarking Framework to Detect Fake News for Credible US Elections

no code implementations • 27 Nov 2023 • Tahniat Khan, Mizanur Rahman, Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza

We have created a novel dataset of North American election-related news articles through a blend of advanced language models (LMs) and thorough human verification to ensure precision and relevance.

Benchmarking • Computational Efficiency • +1

She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models

no code implementations • 20 Oct 2023 • Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza

Additionally, implementing a test suite such as ours lowers the environmental overhead of making models safe and fair.

Fairness
