[Submitted on 2 Nov 2025]
Analyzing Layer-Adaptive Optimization: When Simple Combinations Fall Short
Abstract: This work presents a systematic investigation of layer-adaptive optimization techniques for language models. We examine whether combining existing approaches (layer-specific learning rates, variance stabilization, and orthogonalization) can improve upon AdamW. Our experiments reveal that while careful tuning yields modest improvements over AdamW (validation loss 4.93 vs. 5.50), the approach falls short of state-of-the-art methods such as Muon (3.54). We provide a detailed analysis of why these intuitive combinations fail to deliver significant gains, offering insights for future optimizer design.
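As a point of reference for one ingredient the abstract names, layer-specific learning rates are commonly implemented through per-layer parameter groups. The sketch below is illustrative only and is not the paper's implementation; the model, the depth-based scaling rule, and all hyperparameters are assumptions.

```python
# Minimal sketch (assumed, not from the paper): layer-specific learning rates
# via PyTorch parameter groups, handed to a standard AdamW optimizer.
import torch
import torch.nn as nn

# Hypothetical stand-in model: a stack of identical linear layers.
model = nn.Sequential(*[nn.Linear(512, 512) for _ in range(8)])

base_lr = 3e-4
param_groups = []
for depth, layer in enumerate(model):
    # Hypothetical schedule: shrink the learning rate for deeper layers.
    param_groups.append({
        "params": layer.parameters(),
        "lr": base_lr / (1.0 + 0.1 * depth),
    })

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)
```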
Submission history
[v1] Sun, 2 Nov 2025 04:42 UTC