[Submitted on 24 Oct 2025]
LAVSM: Layer-Adaptive Variance-Stabilized Momentum for Language Model Optimization
Abstract: We introduce Layer-Adaptive Variance-Stabilized Momentum (LAVSM), an optimizer for language model training that combines layer-specific scaling with variance stabilization. On the FineWeb benchmark, using a 134M-parameter Qwen architecture, LAVSM reaches a validation loss of 4.899, a modest improvement over the AdamW (4.927) and Lion (6.114) baselines. Our results suggest that careful layer-specific adaptation can provide consistent convergence benefits, though at the cost of some memory overhead.
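The abstract does not state LAVSM's update rule, so the sketch below is only an illustrative guess at what "layer-specific scaling with variance stabilization" could look like on top of a standard momentum buffer: the variance-stabilization step (dividing by the layer's gradient standard deviation) and the LARS-style trust ratio used for layer-adaptive scaling are assumptions, not the authors' algorithm, and the class name LayerAdaptiveVSM is hypothetical.

import torch

class LayerAdaptiveVSM(torch.optim.Optimizer):
    """Hypothetical sketch of a per-layer, variance-stabilized momentum update.
    Not the published LAVSM algorithm; an illustration of the idea only."""

    def __init__(self, params, lr=1e-3, beta=0.9, eps=1e-8):
        super().__init__(params, dict(lr=lr, beta=beta, eps=eps))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            lr, beta, eps = group["lr"], group["beta"], group["eps"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if not state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # Standard momentum accumulation.
                m.mul_(beta).add_(p.grad, alpha=1 - beta)
                # Variance stabilization (assumed form): divide by the layer's
                # gradient standard deviation so update magnitudes are comparable
                # across layers with very different gradient scales.
                std = p.grad.std().clamp_min(eps) if p.grad.numel() > 1 else 1.0
                u = m / std
                # Layer-adaptive scaling (assumed form): LARS-style trust ratio
                # tying the step size to the layer's parameter norm.
                p_norm, u_norm = p.norm(), u.norm()
                trust = (p_norm / u_norm).item() if p_norm > 0 and u_norm > 0 else 1.0
                p.add_(u, alpha=-lr * trust)

Used like any PyTorch optimizer, e.g. optimizer = LayerAdaptiveVSM(model.parameters(), lr=3e-4) followed by the usual loss.backward() / optimizer.step() loop. Note that this sketch stores only a single momentum buffer per parameter; the memory overhead mentioned in the abstract presumably comes from additional per-layer statistics that LAVSM tracks.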
Submission history
[v1] Fri, 24 Oct 2025 06:08 UTC