Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 26.05.2024 |
Summary: | Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model to output undesired responses. We investigate this problem in a principled manner by identifying the source of the misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model, one that simultaneously minimizes the sum of a maximum likelihood estimation loss and a reward penalty term. Here, the reward penalty term is introduced to prevent the policy from choosing actions with spuriously high proxy rewards, yielding provable sample efficiency of the algorithm under a partial-coverage-style condition. Moving from theory to practice, the proposed algorithm further enjoys an equivalent but surprisingly easy-to-implement reformulation. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines: (i) a preference optimization loss that directly aligns the policy with human preference, and (ii) a supervised learning loss that explicitly makes the policy imitate a (suitable) baseline distribution. In the context of aligning large language models (LLMs), this objective fuses the direct preference optimization (DPO) loss with the supervised fine-tuning (SFT) loss to help mitigate overoptimization towards undesired responses; hence we name the algorithm Regularized Preference Optimization (RPO). Experiments on aligning LLMs demonstrate the improved performance of RPO compared with DPO baselines. Our work sheds light on the interplay between preference optimization and SFT in tuning LLMs, with both theoretical guarantees and empirical evidence. |
---|---|
DOI: | 10.48550/arxiv.2405.16436 |
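
The combined objective described in the summary lends itself to a compact implementation. Below is a minimal, hypothetical sketch (not taken from the paper's code) assuming the baseline distribution is instantiated as the preferred responses in the preference data; `beta` and `eta` are illustrative weighting hyperparameters, and the `*_logps` inputs are per-example sums of token log-probabilities under the policy and a frozen reference model.

```python
import torch
import torch.nn.functional as F

def rpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps,
             beta=0.1, eta=0.005):
    """Sketch of an RPO-style objective: a DPO preference loss plus an
    SFT (negative log-likelihood) term on the preferred responses, which
    play the role of the (suitable) baseline distribution.

    Hyperparameter names and default values are illustrative, not taken
    from the paper.
    """
    # DPO term: logistic loss on the gap between the policy's and the
    # reference model's implicit reward margins.
    pi_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    dpo_term = -F.logsigmoid(beta * (pi_logratio - ref_logratio))

    # SFT regularizer: negative log-likelihood of the preferred responses
    # under the policy, discouraging drift towards spuriously high-reward outputs.
    sft_term = -policy_chosen_logps

    return (dpo_term + eta * sft_term).mean()
```

In this sketch, setting `eta` to zero recovers a plain DPO objective, while a larger `eta` pulls the policy more strongly towards the baseline responses, which is the regularization effect the abstract attributes to the SFT loss.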