[Submitted on 4 Nov 2025]
Analysis of Hybrid Orthogonal-AdamW Optimization for Language Models
Abstract: We present a detailed empirical study of a hybrid optimizer combining AdamW with orthogonal gradient updates for transformer attention layers. Our comprehensive evaluation on the FineWeb benchmark using a 134M-parameter Qwen model reveals that while the method shows interesting theoretical properties, it achieves a final validation loss of 5.801, underperforming both the AdamW baseline (4.927) and state-of-the-art approaches. We provide complete implementation details, thorough ablation studies, and analysis of the method's limitations to facilitate future research in constrained optimization for language models.
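
The abstract does not specify how the orthogonal updates are computed or how parameters are routed between the two update rules, so the following is only a minimal PyTorch sketch of one plausible reading: AdamW for all non-attention parameters, and an orthogonalized gradient step (here via SVD) for 2-D attention weight matrices. The SVD-based orthogonalization, the name-based parameter split, and every hyperparameter below are illustrative assumptions, not the paper's exact recipe.

# Hypothetical sketch of a hybrid Orthogonal/AdamW optimizer.
# Assumptions (not from the paper): SVD orthogonalization, "attn" name-based
# parameter split, and all hyperparameter values.
import torch


def orthogonalize(grad: torch.Tensor) -> torch.Tensor:
    """Return the nearest semi-orthogonal matrix to `grad` via SVD (U @ V^T)."""
    u, _, vt = torch.linalg.svd(grad, full_matrices=False)
    return u @ vt


class HybridOrthogonalAdamW:
    def __init__(self, model, lr=3e-4, ortho_lr=1e-3, weight_decay=0.1):
        attn_params, other_params = [], []
        for name, p in model.named_parameters():
            # Assumed split: 2-D attention projection weights get orthogonal updates.
            if "attn" in name and p.ndim == 2:
                attn_params.append(p)
            else:
                other_params.append(p)
        self.attn_params = attn_params
        self.ortho_lr = ortho_lr
        self.adamw = torch.optim.AdamW(other_params, lr=lr, weight_decay=weight_decay)

    @torch.no_grad()
    def step(self):
        self.adamw.step()  # standard AdamW step for non-attention parameters
        for p in self.attn_params:
            if p.grad is not None:
                # Replace the raw gradient with its orthogonal factor before the update.
                p.add_(orthogonalize(p.grad), alpha=-self.ortho_lr)

    def zero_grad(self):
        self.adamw.zero_grad(set_to_none=True)
        for p in self.attn_params:
            p.grad = None

In use, the object would stand in for a single optimizer in an otherwise ordinary training loop (compute loss, backward, step, zero_grad); the attention-layer step has no momentum or adaptive scaling in this sketch, which is one of several choices the paper may make differently.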
Submission history
[v1] Tue, 4 Nov 2025 23:36 UTC