Preference Tuning LLMs with Direct Preference Optimization Methods
Addendum

After consulting with the authors of the IPO paper, we discovered that the implementation of IPO in TRL was incorrect; in particular, the log-likelihoods of the completions need to be averaged over tokens instead of summed. We have added a fix in this PR and re-run the experiments. The results are now consistent with the paper, with IPO on par with DPO and performing better than KTO in the paired preference setting. We have updated the post to reflect these new results.
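To make the fix concrete, here is a minimal sketch of how the completion log-likelihoods enter the IPO loss, written in plain PyTorch. The names (`ipo_loss`, the `*_logps_tok` inputs, `beta`) are illustrative, not actual TRL identifiers; the only difference between the broken and fixed versions is whether the per-token log-probabilities are summed or averaged over each completion.

```python
import torch

def ipo_loss(chosen_logps_tok, rejected_logps_tok,
             ref_chosen_logps_tok, ref_rejected_logps_tok,
             mask_chosen, mask_rejected, beta=0.1):
    """Illustrative IPO loss on a batch of preference pairs.
    *_logps_tok: per-token log-probs of the completions, shape (batch, seq_len);
    the masks are 1 on completion tokens and 0 elsewhere."""
    # The fix: average the per-token log-likelihoods over each completion ...
    def avg_logps(logps, mask):
        return (logps * mask).sum(-1) / mask.sum(-1)

    # ... whereas the earlier (incorrect) version summed them:
    # def sum_logps(logps, mask): return (logps * mask).sum(-1)

    pi_logratios = avg_logps(chosen_logps_tok, mask_chosen) \
        - avg_logps(rejected_logps_tok, mask_rejected)
    ref_logratios = avg_logps(ref_chosen_logps_tok, mask_chosen) \
        - avg_logps(ref_rejected_logps_tok, mask_rejected)

    # IPO regresses this margin towards 1 / (2 * beta)
    return (pi_logratios - ref_logratios - 1 / (2 * beta)) ** 2


# Toy usage with random per-token log-probs and full masks
b, t = 2, 8
rand_logps = lambda: -torch.rand(b, t)  # fake per-token log-probabilities
mask = torch.ones(b, t)
loss = ipo_loss(rand_logps(), rand_logps(), rand_logps(), rand_logps(), mask, mask)
print(loss.shape)  # torch.Size([2]) -- one loss value per preference pair
```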

TL;DR

We evaluate three promising methods for aligning language models without reinforcement learning (also referred to as preference tuning) across a number of models and hyperparameter settings. In particular, we train with different hyperparameters and evaluate the following methods (a sketch of the corresponding losses follows the list):

Direct Preference Optimization (DPO)
Identity Preference Optimisation (IPO)
Kahneman-Tversky Optimisation (KTO)
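For orientation, the sketch below writes out the three paired losses in plain PyTorch, assuming the per-completion log-probabilities of the policy and reference model have already been computed. The DPO and IPO branches follow the published formulas; the `kto_pair` branch is a rough paraphrase of the paired KTO variant used in these experiments and may differ in detail from the actual TRL implementation. The `loss_type` names ("sigmoid" for DPO, "ipo", "kto_pair") follow the conventions used in TRL at the time of these experiments.

```python
import torch
import torch.nn.functional as F

def preference_losses(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta=0.1, loss_type="sigmoid"):
    """Sketch of the three paired preference losses.
    All inputs are per-completion log-probabilities of shape (batch,)."""
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    logits = chosen_logratios - rejected_logratios  # implicit reward margin

    if loss_type == "sigmoid":   # DPO: logistic loss on the reward margin
        return -F.logsigmoid(beta * logits)
    if loss_type == "ipo":       # IPO: regress the margin towards 1 / (2 * beta)
        return (logits - 1 / (2 * beta)) ** 2
    if loss_type == "kto_pair":  # paired KTO (rough paraphrase, not exact TRL code)
        chosen_kl = chosen_logratios.mean().clamp(min=0)
        rejected_kl = rejected_logratios.mean().clamp(min=0)
        return torch.cat(
            (1 - torch.sigmoid(beta * (chosen_logratios - rejected_kl)),
             1 - torch.sigmoid(beta * (chosen_kl - rejected_logratios))))
    raise ValueError(f"unknown loss_type: {loss_type}")


# Toy usage: the policy prefers the chosen completion more than the reference does
pc, pr = torch.tensor([-10.0]), torch.tensor([-14.0])
rc, rr = torch.tensor([-12.0]), torch.tensor([-12.0])
for name in ("sigmoid", "ipo", "kto_pair"):
    print(name, preference_losses(pc, pr, rc, rr, loss_type=name))
```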