9 May 2023 • Zhang Ze Yu, Lau Jia Jaw, Zhang Hui, Bryan Kian Hsiang Low
Reinforcement Learning from Human Feedback (RLHF) has been shown to significantly enhance the performance of large language models (LLMs) by aligning their outputs with desired human values through instruction tuning.
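The RLHF pipeline alluded to above typically begins by training a reward model on pairs of human-ranked responses. A minimal sketch of the standard pairwise (Bradley-Terry style) reward-modelling loss is below; the scalar scores are hypothetical stand-ins for real reward-model outputs, not part of this paper's method.

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): the loss is small when the
    reward model scores the human-preferred response above the rejected
    one, and large when the ordering is inverted."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ordered pair incurs a small loss; a mis-ordered pair a large one.
good = pairwise_reward_loss(2.0, 0.0)   # preferred response scored higher
bad = pairwise_reward_loss(0.0, 2.0)    # preferred response scored lower
assert good < bad
```

Minimizing this loss over a preference dataset yields a reward model whose scores can then drive a policy-gradient fine-tuning stage (e.g. PPO) on the LLM.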