Exploring the Influence of Synthetic Training Data Diversity on the Behavior of Fine-Tuned Large Language Models
Summary
In recent years, research on large language models (LLMs) has become an extremely popular and active subdiscipline of artificial intelligence (AI). As LLMs become more capable, they are increasingly used to generate data for further LLM training, complementing or replacing human-written text. However, because synthetic text differs systematically from human-written text, LLMs trained or fine-tuned on such data can begin to behave in unexpected ways: for instance, their output distribution shifts away from the distribution of human-written text, a phenomenon previous research has termed “model collapse”. Research on model collapse has thus far mostly focused on single-source scenarios, that is, the repeated training of LLMs on their own outputs, which has been shown to induce collapse. This thesis investigates the use of multi-source synthetic data, i.e., data generated by multiple source models, as a strategy for mitigating model collapse. The efficacy of this approach is investigated from different angles: Experiment 1 focuses on a diverse range of metrics for measuring model collapse directly, Experiment 2 investigates the impact of different fine-tuning regimes on model safety, and Experiment 3 examines the implications for LLM self-preference bias. We find compelling evidence for the efficacy of multi-source synthetic data in mitigating model collapse. We also describe various complex interactions between synthetic data source diversity, the size of the data-generating models, and the size of the fine-tuned models, with varying implications for model safety and self-preference bias. Finally, we show the importance of metric choice in the study of model collapse, with different measurement approaches yielding different outcomes.
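To make the multi-source idea concrete, the sketch below (not the thesis's actual pipeline; all names and corpora are hypothetical) interleaves generations from several source models into one fine-tuning corpus and compares the unigram distribution of the result against a human-written reference, as a crude proxy for the distribution shift associated with model collapse.

```python
# Minimal sketch, assuming toy corpora: mix synthetic text from multiple
# hypothetical source models, then measure distribution shift relative to
# a human-written reference via total-variation distance over unigrams.
from collections import Counter
from itertools import chain, zip_longest

def unigram_dist(texts):
    """Relative unigram frequencies over a list of documents."""
    counts = Counter(chain.from_iterable(t.lower().split() for t in texts))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def total_variation(p, q):
    """Total-variation distance between two unigram distributions."""
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)

def mix_sources(*source_corpora):
    """Round-robin interleaving of documents from multiple source models."""
    return [doc for batch in zip_longest(*source_corpora)
            for doc in batch if doc is not None]

# Hypothetical corpora; in practice these would be generations from
# distinct source LLMs plus a sample of human-written text.
human   = ["the cat sat on the mat", "a dog barked at the mailman"]
model_a = ["the cat sat on the mat", "the cat sat on the mat"]  # low diversity
model_b = ["a dog chased the cat", "birds sang in the garden"]

single_source = model_a
multi_source = mix_sources(model_a, model_b)

ref = unigram_dist(human)
print("single-source shift:", total_variation(ref, unigram_dist(single_source)))
print("multi-source shift: ", total_variation(ref, unigram_dist(multi_source)))
```

Under these toy assumptions, the multi-source mixture stays closer to the human reference than the single-source corpus, which is the intuition behind the mitigation strategy studied in the thesis.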
Related items
Showing items related by title, author, creator and subject.
- Modeling dual-task performance: do individualized models predict dual-task performance better than average models?
  Cao, W. (2017) Understanding multitasking can be a complicated venture. The goal of this paper is to see whether using individual parameters for modeling dual-task performance will lead to better predictions of individual performance compared to ...
- Modelling Wastewater Quantity and Quality in Mexico -- using an agent-based model
  Chen, Y. (2021) Wastewater is a key element in regional and global water cycles, and the discharge of a large quantity of untreated wastewater poses serious threats to the environment and public health in Mexico. To have a thorough ...
- Modelling offshore wind in the IMAGE/TIMER model
  Gernaat, D.E.H.J. (2012) Current global energy consumption is expected to continue to grow, as the global population is likely to increase towards 9 billion by 2050 while income levels per capita surge by 3-5% per year. Resource depletion, climate ...