Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment
Format: Journal Article
Language: English
Published: 28.10.2024
Summary: Recent advances in aligning large language models with human preferences have corroborated the growing importance of best-of-N distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computational inefficiency. This paper addresses the problem by revealing a unified game-theoretic connection between iterative BOND and self-play alignment, which unifies seemingly disparate algorithmic paradigms. Based on this connection, we establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization that approximate iterative BOND in the parameter space. We provide a provable sample efficiency guarantee for one of the WIND variants with the square loss objective. The experimental results confirm that our algorithm not only accelerates computation but also achieves superior sample efficiency compared to existing methods.
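For context on what BOND distills, best-of-N sampling itself can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: `generate` and `reward` are hypothetical stand-ins for an LLM sampler and a reward model, and none of the names below come from the source.

```python
from itertools import cycle

def best_of_n(generate, reward, prompt, n=4):
    """Draw n candidate responses and keep the one the reward model scores highest.

    BOND-style methods train a single policy to imitate this best-of-n
    distribution without paying the n-fold sampling cost at inference time.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))

# Toy demo with deterministic stand-ins: candidates cycle through fixed
# strings, and the "reward" is simply response length.
pool = cycle(["ok", "a longer answer", "hi"])
best = best_of_n(lambda p: next(pool), lambda p, c: len(c), "prompt", n=3)
# best == "a longer answer": the length reward picks the longest candidate
```

The sketch makes the cost structure visible: every query requires `n` generator calls plus `n` reward evaluations, which is the per-iteration expense that WIND's parameter-space approximation aims to avoid.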
DOI: 10.48550/arxiv.2410.20727