1 Mar 2024 • Kedi Chen, Qin Chen, Jie Zhou, Yishen He, Liang He
Although large language models (LLMs) have achieved significant success in recent years, hallucination remains a challenge, and numerous benchmarks have been proposed to detect it.