How to fine-tune: Focus on effective datasets
This is the third blog post in a series about adapting open source large language models (LLMs). In this post, we explore some rules of thumb for curating a good training dataset.

In Part 1, we took a look at prevalent approaches for adapting language models to domain data.
In Part 2, we discussed how to determine if fine-tuning is the right approach for your use case.
Introduction

Fine-tuning LLMs is a mix of art and science, with best practices in the field still emerging. In this blog post, we’ll highlight the design variables for fine-tuning and give directional guidance on the best practices we’ve seen so far for fine-tuning models under resource constraints. We recommend using the information below as a starting point to strategize your fine-tuning experiments.

Full fine-tuning vs. parameter-efficient fine-tuning (PEFT)

Both full fine-tuning and PEFT have shown improvements in downstream performance when applied to new domains in both academic and practical settings. Choosing between them comes down to the compute available (in GPU hours and GPU memory), performance on tasks other than the target downstream task (the learning-forgetting tradeoff), and human annotation costs.
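
To make the GPU memory side of that tradeoff concrete, here is a back-of-envelope sketch comparing training-state memory for full fine-tuning and a LoRA-style adapter. The per-parameter byte counts (bf16 weights and gradients, fp32 Adam states and master weights) and the 1% trainable-parameter fraction are illustrative assumptions, and activation memory is ignored entirely:

```python
# Rough GPU memory estimate: full fine-tuning vs. a LoRA-style adapter.
# Assumptions (illustrative): bf16 weights and gradients, fp32 Adam
# states and master weights, activation memory ignored.

def full_finetune_gb(n_params: float) -> float:
    # 2 B (bf16 weights) + 2 B (bf16 grads)
    # + 8 B (fp32 Adam m, v) + 4 B (fp32 master weights) per parameter
    return n_params * (2 + 2 + 8 + 4) / 1e9

def lora_gb(n_params: float, trainable_fraction: float = 0.01) -> float:
    # Frozen bf16 base weights, plus the full training state
    # (grads + optimizer) only for the small adapter.
    frozen = n_params * 2
    adapter = n_params * trainable_fraction * (2 + 2 + 8 + 4)
    return (frozen + adapter) / 1e9

for n in (7e9, 70e9):
    print(f"{n / 1e9:.0f}B params: full ~{full_finetune_gb(n):.0f} GB, "
          f"LoRA ~{lora_gb(n):.0f} GB (excluding activations)")
```

Under these assumptions, a 7B-parameter model needs roughly 112 GB of training state for full fine-tuning versus about 15 GB for the adapter setup, which is why PEFT is often the only practical option on a single GPU.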

Full fine-tuning is more prone to two problems: model collapse and catastrophic forgetting. Model collapse occurs when the model’s output converges to a limited set of responses and the tail of the original content distribution disappears. Catastrophic forgetting, as discussed in Part 1 of this series, causes the model to lose capabilities it had before fine-tuning. Some early empirical studies suggest that full fine-tuning is more prone to these issues than PEFT techniques, though more research needs to be done.

PEFT techniques serve as natural regularizers for fine-tuning by design. PEFT typically requires less compute to train a downstream model and is much more accessible in resource-constrained scenarios with limited dataset sizes. In some cases, full fine-tuning has shown better performance on the specific task of interest, often at the cost of forgetting some of the capabilities of the original model. This “learning-forgetting” tradeoff between performance on the specific downstream task and performance on other tasks is explored in depth in the comparison of LoRA and full fine-tuning in this paper.
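
To make the regularization point concrete, below is a minimal LoRA sketch using the Hugging Face peft library. The model name and hyperparameters (rank, alpha, target modules) are illustrative assumptions, not recommendations from this series:

```python
# Minimal LoRA setup with Hugging Face transformers + peft.
# Model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # adapter rank: lower = fewer trainable params
    lora_alpha=16,                        # scaling factor applied to the adapter update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because the base weights stay frozen and only the low-rank adapters are trained, the update is constrained to a small subspace of the full parameter space; this is the built-in regularization referred to above, and it is also what keeps the memory and compute footprint low.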

Given resource constraints, PEFT techniques will likely give a better performance-boost-to-cost ratio than full fine-tuning. If downstream performance is of paramount importance and resources allow, full fine-tuning will be the most effective. In either scenario, the key is to create a high-quality dataset, keeping the following principles in mind.