Search Results for author: Rohan Ajwani

Found 1 paper, 1 paper with code

LLM-Generated Black-box Explanations Can Be Adversarially Helpful

1 code implementation · 10 May 2024 · Rohan Ajwani, Shashidhar Reddy Javaji, Frank Rudzicz, Zining Zhu

Most LLMs are not able to find alternative paths along simple graphs, indicating that their misleading explanations aren't produced only by logical deduction using complex knowledge.
