REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
Episode 356 · 21:46

🤗 Upvotes: 51 | cs.CL, cs.LG

Authors:
Jian Hu

Title:
REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models

Arxiv:
http://arxiv.org/abs/2501.03262v1

Abstract:
Reinforcement Learning from Human Feedback (RLHF) has emerged as a critical approach for aligning large language models with human preferences, witnessing rapid algorithmic evolution through methods such as Proximal Policy Optimization (PPO), Direct Preference Optimization (DPO), REINFORCE Leave One-Out (RLOO), ReMax, and Group Relative Policy Optimization (GRPO). We present REINFORCE++, an enhanced variant of the classical REINFORCE algorithm that incorporates key optimization techniques from PPO while eliminating the need for a critic network. REINFORCE++ achieves three primary objectives: (1) simplicity, (2) enhanced training stability, and (3) reduced computational overhead. Through extensive empirical evaluation, we demonstrate that REINFORCE++ exhibits superior stability compared to GRPO and achieves greater computational efficiency than PPO while maintaining comparable performance. The implementation is available at https://github.com/OpenRLHF/OpenRLHF.
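The abstract only describes the recipe at a high level: REINFORCE-style policy gradients, PPO-style stabilization tricks, and no value (critic) network. Below is a minimal PyTorch sketch of what such a critic-free, clipped policy-gradient loss could look like. The function name `reinforce_pp_loss`, the per-token KL reward shaping against a frozen reference model, the batch-wide advantage normalization, and all hyperparameter values are illustrative assumptions, not details taken from the paper; the reference implementation lives in the linked OpenRLHF repository.

```python
# Sketch of a critic-free, PPO-style policy-gradient loss in the spirit of the abstract.
# All specifics (KL shaping, global advantage normalization, coefficients) are assumptions.
import torch


def reinforce_pp_loss(logprobs, old_logprobs, ref_logprobs, rewards,
                      kl_coef=0.01, clip_eps=0.2):
    """logprobs/old_logprobs/ref_logprobs: (batch, seq_len) per-token log-probs from the
    current policy, the rollout policy, and a frozen reference model.
    rewards: (batch,) sequence-level scalar rewards from the reward model."""
    # Per-token KL penalty toward the reference model, used as reward shaping
    # (assumed detail; the abstract only says "PPO techniques, no critic").
    kl = (logprobs - ref_logprobs).detach()              # (batch, seq_len)
    shaped = -kl_coef * kl                                # penalize drift from the reference
    shaped[:, -1] = shaped[:, -1] + rewards               # sequence reward on the final token

    # Return-to-go per token, then normalization over the whole batch
    # as a critic-free advantage estimate (no value network anywhere).
    returns = torch.flip(torch.cumsum(torch.flip(shaped, dims=[1]), dim=1), dims=[1])
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)

    # PPO-style clipped surrogate objective on the importance ratio.
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()


# Toy usage with random tensors standing in for real model outputs.
B, T = 4, 16
logprobs = torch.randn(B, T, requires_grad=True)
old_logprobs = logprobs.detach() + 0.01 * torch.randn(B, T)
ref_logprobs = torch.randn(B, T)
rewards = torch.randn(B)
loss = reinforce_pp_loss(logprobs, old_logprobs, ref_logprobs, rewards)
loss.backward()
```

Because the advantage is built from the shaped reward alone, the only trainable network in the loop is the policy itself, which is where the claimed memory and compute savings over PPO would come from.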

