InfinityMATH: A Scalable Instruction Tuning Dataset in Programmatic Mathematical Reasoning
Published on Aug 9
Authors: Bo-Wen Zhang, Yan Yan, Lin Li, Guang Liu
Abstract
Recent advancements in Chain-of-Thoughts (CoT) and Program-of-Thoughts (PoT) methods have greatly enhanced the mathematical reasoning capabilities of language models, facilitating their integration into instruction tuning datasets built with LLMs. However, existing methods for large-scale dataset creation require substantial seed data and high computational costs for data synthesis, posing significant challenges for scalability. We introduce InfinityMATH, a scalable instruction tuning dataset for programmatic mathematical reasoning. The construction pipeline emphasizes decoupling numbers from mathematical problems to synthesize number-independent programs, enabling efficient and flexible scaling while minimizing dependency on specific numerical values. Fine-tuning experiments with open-source language and code models, such as Llama2 and CodeLlama, demonstrate the practical benefits of InfinityMATH. These fine-tuned models showed significant relative improvements on both in-domain and out-of-domain benchmarks, ranging from 184.7% to 514.3% on average. Additionally, they exhibited high robustness on the GSM8K+ and MATH+ benchmarks, which are enhanced versions of the original test sets with simple number variations. InfinityMATH ensures that models are more versatile and effective across a broader range of mathematical problems. The data is available at https://huggingface.co/datasets/flagopen/InfinityMATH.
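To make the core idea more concrete, below is a minimal, hypothetical sketch (not taken from the dataset) of what a number-independent PoT program might look like: the concrete numbers of a word problem are lifted into parameters, so new training instances can be generated by re-sampling the numbers and executing the program, without additional LLM calls. The function name `solution`, the example problem, and the parameter names are illustrative assumptions.

```python
# Illustrative sketch of a "number-independent" program: numbers from the
# word problem are replaced by named parameters, so the same program can be
# re-instantiated with fresh values to scale the dataset.

def solution(apples_start: float, apples_bought: float, apples_eaten: float) -> float:
    """Template problem: 'Alice has {apples_start} apples, buys {apples_bought}
    more, then eats {apples_eaten}. How many apples does she have left?'"""
    return apples_start + apples_bought - apples_eaten

if __name__ == "__main__":
    # Scaling then reduces to sampling new number tuples, re-rendering the
    # problem text from the template, and executing the program for the answer.
    for numbers in [(3, 5, 2), (10, 4, 7), (120, 36, 58)]:
        print(f"instance {numbers} -> answer {solution(*numbers)}")
```

The same parameterization naturally supports robustness checks in the spirit of GSM8K+ and MATH+: evaluation questions can be re-instantiated with different numbers while the underlying reasoning program stays unchanged.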