[Submitted on 5 Nov 2025]
Revisiting AdamW: A Rigorous Examination of Hyperparameter Sensitivity in Language Model Optimization
Abstract: This paper presents a comprehensive analysis of AdamW hyperparameter sensitivity in transformer language model training. Through systematic ablation studies across 27 configurations on the FineWeb dataset, we quantify the impact of learning rate, momentum parameters ($\beta_1$, $\beta_2$), and weight decay on final model performance. Our experiments on a 134M-parameter Qwen architecture reveal that while careful tuning yields statistically significant improvements (p < 0.05, paired t-test), the absolute gains remain modest (a 0.07% reduction in validation loss) relative to those reported for state-of-the-art optimization approaches.
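To make the experimental setup concrete, the following is a minimal sketch of how a 27-configuration AdamW ablation grid might be enumerated in PyTorch. The abstract does not state the grid values or how the 27 configurations are composed; the 3 × 3 × 3 split over learning rate, (β1, β2) pairs, and weight decay, and all numeric values below, are illustrative assumptions, not the paper's settings.

```python
# Hypothetical AdamW hyperparameter ablation grid (values are illustrative
# placeholders, not the paper's actual configurations).
import itertools
import torch

# Assumed 3 x 3 x 3 grid: learning rate, (beta1, beta2) pairs, weight decay.
learning_rates = [1e-4, 3e-4, 1e-3]
beta_pairs = [(0.9, 0.95), (0.9, 0.999), (0.95, 0.999)]
weight_decays = [0.0, 0.01, 0.1]


def build_optimizer(model: torch.nn.Module, lr: float,
                    betas: tuple, wd: float) -> torch.optim.AdamW:
    """Construct an AdamW optimizer for one configuration from the grid."""
    return torch.optim.AdamW(model.parameters(), lr=lr, betas=betas,
                             weight_decay=wd)


# Enumerate all 3 * 3 * 3 = 27 configurations for the ablation.
configs = list(itertools.product(learning_rates, beta_pairs, weight_decays))
for lr, betas, wd in configs:
    print(f"lr={lr}, betas={betas}, weight_decay={wd}")
```

Each configuration would then be used to train an identical model on the same data, with final validation losses compared across runs (e.g., via a paired t-test, as the abstract describes).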
Submission history
[v1] Wed, 5 Nov 2025 19:16 UTC