MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion · Episode 502

23:02 | 🤗 Upvotes: 13 | cs.CL

Authors:
Xintong Hao, Ke Shen, Chenggang Li

Title:
MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion

Arxiv:
http://arxiv.org/abs/2502.04235v1

Abstract:
Despite the remarkable capabilities of large language models across various tasks, their continued scaling faces a critical challenge: the scarcity of high-quality pretraining data. While model architectures continue to evolve, natural language data struggles to scale up. To tackle this bottleneck, we propose the MAssive Genre-Audience (MAGA) reformulation method, which systematically synthesizes diverse, contextually rich pretraining data from existing corpora. This work makes three main contributions: (1) We propose the MAGA reformulation method, a lightweight and scalable approach for pretraining corpus expansion, and build a 770B-token MAGACorpus. (2) We evaluate MAGACorpus under different data budget scaling strategies, demonstrating consistent improvements across model sizes (134M-13B) and establishing the necessity of large-scale synthetic data for next-generation pretraining. (3) Through comprehensive analysis, we investigate prompt engineering's impact on synthetic training collapse and reveal the limitations of conventional collapse detection metrics based on validation loss. Our work shows that MAGA can substantially expand training datasets while maintaining quality, offering a reliable pathway for scaling models beyond data limitations.
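To make the core idea concrete, here is a minimal sketch of genre-audience reformulation: each source document is rewritten by an LLM for several (genre, audience) pairs, multiplying the effective token count of the corpus. This is an illustrative assumption of how such a pipeline might look, not the authors' released code; the genre/audience lists, prompt wording, and the `generate` callable are all hypothetical.

```python
# Hypothetical sketch of genre-audience reformulation (not the paper's actual pipeline).
# Each document is rewritten for several (genre, audience) pairs to expand the corpus.

from itertools import product

# Assumed example taxonomies; the paper's actual genre/audience sets may differ.
GENRES = ["textbook chapter", "news explainer", "dialogue", "FAQ"]
AUDIENCES = ["middle-school students", "domain experts", "casual readers"]

PROMPT_TEMPLATE = (
    "Rewrite the following document as a {genre} aimed at {audience}. "
    "Preserve all factual content.\n\n{document}"
)

def reformulate(document: str, generate, pairs_per_doc: int = 3) -> list[str]:
    """Produce several genre-audience reformulations of one document.

    `generate` is any callable mapping a prompt string to generated text
    (e.g., a wrapper around a hosted or local LLM).
    """
    outputs = []
    for genre, audience in list(product(GENRES, AUDIENCES))[:pairs_per_doc]:
        prompt = PROMPT_TEMPLATE.format(genre=genre, audience=audience, document=document)
        outputs.append(generate(prompt))
    return outputs

if __name__ == "__main__":
    # Stub generator so the sketch runs without any model or API key.
    demo = reformulate(
        "Transformers process tokens in parallel using attention.",
        generate=lambda p: f"[reformulated] {p[:60]}...",
    )
    print(len(demo), "reformulations produced")
```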

